[Binary tar archive — contents not recoverable as text.]

Archive listing (from tar headers):
  var/home/core/zuul-output/                      (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/                 (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed file, mode 0644, owner core:core)

The remainder of the file is the gzip-compressed byte stream of kubelet.log; it is binary data and cannot be reconstructed as readable log text.

EG/TPjܪ 5$5*РzI j!;3;i++Aax[V`[]̑VQ4S.s8"= ]/"C28IKSi\%ќ:ZCbe@r,YҚ3PcrG @hFp€`$521,WX5iF; XȳH_^\C=uEq{v Wܞo|ZicWL&!;KF[Np#X y;{dyPL Ucj\mq3џ3[r Rgj0%]O޳0_*7!IϾ#+Ӡľťq7ɺ\o1yT$'*ry?/ObʚGErW]'ʯFGQAnHQ;-;NVxT?>3(|ܝ|7٥)Q?ؓsW7 N1!tXv;)go5ZCN09]3crO]!,]UX~~ݲаN9/|ppv^-Ջ]2^~Jgݹ7wFoYաq&>3]MCڧѢueb,h\`$(GÛlI\ n-muvmݫF\.f-R;041ƓҏQ6YrH8W/-ЁcEyYFAQ&ŌdȒ'|1x3?J{4\TUEK4"J0VP-FǙ\?ߝ?~)&ݿߝ~;8' eUu4~{$bXݧƤaT]V]u;Kx22N|^wKQpoU{ˊFt-`KO'H0ҼJs;f@Y*@ ԉ0 TWX!luw1*r:B2X0ͅ &0QY mIqM 8Ψܳ/b=V>O1QDAZ0 -WCiuV:U VG;GTF?i|7gןfv!R r$Ϲ#JN\ؐ 61W;d3l 1}䓝,wN( @O ֬Ò!gtbф$3!o4!2no䜲dQ[pnPbҞpS٤@aa<ljcD<)KgJu~; @ۣEKkor*l`UKoٹd.53,4\H46M0^xja!x!zJXK![\n=&uI[[jHiڠfaEcy51m3utsr="skGUGSJ],-ʗ?s:{R 떪)*㩅WsgB6GZ%U1QE^(mJ5u./hf Srz9ҁdQIh)Ms-εw:M/6qh0%hr,/Cv+/2DzҸa>%,zb/'7%z3}n dz6+ҏ8,&9> m3ք$)72b2 #}jDH9SL2d^j.t؝K=|$Zr:tXD;6^>Hp~Bqka盞oodVgu6JX0w[@\Jւ/S[VO){l̞1{|>\Ӭ{m᫓gf))6=b(iy`j!]O.tn[ԝԮݾ@O׃[3끏$7~ -TN-1~7歯VLTQ^"Csaq\OX^δӭtXEt!}4*;dH (/F*z1oGlG_`J\{:dg3^G)zvrBB.yU@xk}0e`b)DxJe`XK*66C3 9V*R$~gdGE3YʹQUς`^@~Z"^eKWwWξzt\⋚V捾Uj7:"WMLt56X61eY- b4vyYa:&VIdV9S*`+8B+BucvT.zwQjI7[[Y:(QeCq)@ rE#{EGe잮tl۽}C)ξL},㜾m~tЮ>}P \iE顸a }wUٻEVK]m~TTS_a 7.X&c3p)呌/;_(_as=<  #15Lj5$Ryp0Fk $cF.<6AK6 dE>oO֢PP+͒irM;L/I+7̰$w@Ar':̣BUDIJT j zlR:AI`d&MDə^">`9,]<7bƑ= F I邎[^ܵ]trs9A}/o/@D]h֚bF-qy??N!èV˹CDGIq8Q+Õ"EQ&gw=.b}lRΡP.j^gzMZq8VpaѾՀ /76S !$ VSxt$Hiau,* R#+$˭qRuپ{8П/eS&D $4Y-A&\Dу R&FaXv˚Xc/l #ML "z-(KǗb`@fE]X}紞zN{LN;h{RA [BV*]niO9Jic(/A T2t`uJJ|J19ɩ_\G) 1K_E D4O^1*KΟEܖ.Gɹ,gxI2*P}$c4*?bPr`6uKৼ -`Ua'&&[r̍>'gri?}ffyp°͏8~0n5_$߶؂?p1?{Wqe /rvU6v\JVФl)[{.B!Ar"ěr$0w=9#a,)/yYP˘#JtN-'q={f.^ԾkKaw3s5wNQs~;[;K|C9-S(m(<2Ff]&k_|c6"/Xchiyv??Tr[jso?ʘTgk;uvz9qu7m;cV\Y5Kfi:V1]kv`q/Ʉ:Yuo5+ӏ{OW wˣp=-oȁ6Χ3=洽#<@ am+Suſ_Jدw<|$XZVWjo8闋7c#ﳋ*o-~o~Ti/;\־~5 );2*=jp|x ms[!',,n1ϯpE9Oz;iͅ{7zI?qГW5|_^^Ͱ3po5W|= ) C9$߶35}PRYgj({>2Lj wqVKUx R|'|dA>g! 
Z7ܞ{qδMV y9mr{MrjُGsߟzp;z!.CF|[/#/W;w̾z+|h'߁\uҫZ zl|_zbݥ5_"LՕ݋_,$SJ_߂!_1hI%)P$z"ɵ:C%GtxY r/HgݸQsdhe??8@Dz%~ \/%f$FťXvٞl6^j?jMi߼떁g˟Oڽ!'ެ sa~C3uz j>h}T}~ýfSe G:k'kmQ [$po\C~ۣғ>zBTT~{¥yPEU{P5ЩZY{Lp8DWkኣy.Cc+8+7c+D{pQ p*(#VBѯ.Lմ/.Y+7k>Vwv_awWq,2uq~gvd6O|2F^:Z#8Q6ը8gzq7ofYL9/4-&>w G̬;df0v:`4#yؘ㹈Zw,42JaWyp(w W7&Oa+Wqq{6|tךaK5]U3;5_sNya8y\N".//vu d?vS۾s'M'W'n8r&OI룒ea<Ϗ`ɴ|Fc(j vl}`>:|@xXz{~Cev;&s~^.Fp× 5fH@.5{ Ul϶$_Ulڔr_mB2MS&fVG&D{[<Zc1 /}Q}LEiln'M/޿No\?|~<9kuv([AKJh4(b6m^&Cڬin'tҕ M Z2e jft%ERU{^3jrAWڠ9ScFkH'ޞRM̎j!PT#Qi)FN0%n[ƙt8`v !6R^k5[*5k*UM'JmR%Ъm Y-=d6qYxl)*GT8vdH GlbvШBvշ=/^p+׹5ߋ` 9[QG'՛S $5(4_8Rc͖b|0֍B!fڇ?&ChCɸ-Z,@x%C4˳{+ȅF<";F]Q6'&mqAY{|PJh#:і`^"#4*ϏDkgE女<.cY#j̮LVB*%n#ME 0eRj,  #t*|Z(ٸ Z c9*d$,judدao9 oɨLIuؘs(1mTTZ/9s :S@.Za1;9޹W*7L /hPZ%ך4*vkCiw1f@6,2s ,Ŵf(02s6S),:o)mZT5gؗroP4fV$`pjO,AྩT FfVG#~gsU"ši52ƛZui`-Xrn`q0#Q>  okkwjd-Ǭ򪃳*ͨT3$ *3`%jCA@^Atz&dq/` Jud=(tAo) [4L Y3,41G\d4h 3^Xӷlܮ=«skݯ2V:󪂴 +: >g:PhAe2)ڬD0CIVZ?Z7A;?=,cǙq~zg{_arjs6PHy5JE:YJ6<˭6+L0yо ʽ4 S: `-_JQq`!0#R LAPA_f!._GkP*|e @`Mdʫ<5 Z&9+VB |l%C%H*U =+$|/`㍛@x4k \P !W J3^$`@ʐ`)ap;5\ @ϡc!.CAtZDoi]djm:Ҽ t#D0-f"K]52aL̨xv{ ϊڄ4/馠br:Y >FK=jaS_cU,$\|Ք`@-Zo`3ha3d%tZ.ǒ;&  |KPd|? 
rC\7dpрҼpZ(::(N |~>7*V ˴I [wB,ѬNfŚTCRCHnlP񅳄l=M_7 Ϫ$Z-@ZX6a,,q<Ƞbc* n>2L*go V@ ~b?:T"!-A5@4?d% f0oBQ+{ 4 gLmPC jUl j,u|!>@%Z~"X@ƩK802"BFqBh&^y})[ ,Bb ٬&Ď\red(ҳ&H 7u3ХcU%vh("_ JyVy֋lx_l4ЮF jM[@zׯw> @ԣťs9-2`kV~ӫ?lhnG`=:$:׫q ڠSQ,!$Nκ NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:u`|:WϞ:gA$N὜8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:ש<419u9N`{4NY;uuyNdu8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#Nq<_Xa  Gh:4垽SQgq<$NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:8uĩ#NqSG:q|;0Gߣq3~q{}}Xwr\=E/ xښ%E-ٞ,ouZݺQ7]MVU*+RWhJ]kQWZ/]]e*9n;TW1L8@ZU\TV]uz3aksfuuè<0*uZuҩ'3BH]euZP{QIxޡ~M*6ajU&WkQWZ}a֪|'n/;w nTpCsaȸ;>0*4׷PT/' wŎ>K 0uj[f LwQ莶 Pj7{Kvtm~[LMtQvǞE~a,TZX3M}}#7]CSfS5j(o E!Iw U8J^~lAY0`i,<ʬՃf Đ;˕GݾZ|{ UZ'Tlӿ ..p %s 1kڣ-:!Y~:kٟ'JPکcq3\u[0<˛ۙ# <_7<}kw,GU5oКEBj~Q A?@<C1g E@|t)V^7U!BKB'AEP\$5NZE8(iFH) %41NDmɠV&\{u+N-ʕᶻi)x?Fx [@}Zdr49ә\ItK= T=͸Q ћ2'=}Slj|{1j,ƌ"'CL(1K֭mBvTn };`#ڝt;d!jww19oADeAh;l#8δ3!EK1;Rq`+p 7u:y{mҒR4a4ۡ:DXGv$*0I)g9ke%V&x2ǣoAwknL4.a~(˝IeOe&U1 qy_KTk*V'$Ht.Dh+'6J kFP> BbV֡V21h 2wVq@Jp7Ȃ}Nn# FbKJV\obbf";3WM=\Q=ĕnXJ[ijh>+#;nq@8K? 1xē)Q;E%7 CU>^ ės- bVt(6ѭk_3̓P>6QۉСxCc:CVJݴd~GqLn4! EXgѠiHޛs'τy+-3ֲ(uV澿KtYJ71w1k^ZuK&W̊p?Hn>@ELQDK*(yF~ȄNH{[I/v6@Zk- gUwtGW q.%ҊSƞȉڲ3~4'f.8#cPt RZ6M͙zOmA7"_SKNܢ 's. 
\g?u7{.^B\V9["$!upcƵ֞+]`6ЀJA`doe:TTJ7*\ґ1eX'45݌LΞL=AW SXWp{/nFs/}uaM/!80ypXyl?Fɰ\tmEvGy.Tɣo|I+oH6rxF,]w=5MU0M\!(}"D{\FJ$`xE veLla (0@Z" K[V,uASDЊ`Ak\1r/%Ǡ[7kZUӪ#<2%D&Q*鬊IK-mNqH(A1cjjFoӳ~zDpܪW :ILH=.ِkF!H7u'#I2f'tktk#T +GF^.$rt{Uư!l[:rD/An_qRmq0BmbaEa+01AC-e4r*V cк?!-OqRJ5$}0LT*)LL$2& ʊiʂ fbp ŐG 939 JQ\gN VUX`=-]/ z]wShZmt%xOG;I61HN `Rpx|8u(nub#:b]YҍqZ'T ֜57X2lv;y^@x.e˝qASM*3$s=zRWYzSnM3YA ]".7Nh/frKx{R]iFOz}a3HԘU\9է5KuI$RRJ F"&@\ l¹p¿Qp#c]=Hl擮-&|QS(Xe%- D8e8FŖXtТŵh9ǽz5i 4Ϗك|-ݿ:-=VGb\uC*7 ^E0}s>q2#W:j9._EYc2I4Pq~93,pçτ %h y5!0z`msjU lXpo6͜w}T>}u?w>qDMv'kd/* ^?v= XY:v5|vpc`2: @m:/z 4?YCQHqۀ#Rr5$NHcK$&Jk?[+HmE Y ]ŸW G݇;msG` %MNwȊٙOm53p&T2 9XHAPGd*i.f.upN9%g#ZeJ~5 iSr]/-c[lgTXqB1u&<$xc@Y5Q8k7`+H%>ϔ;ǟ`"FYeH23LeK1%T `+p 7gUR[hyò?}f4\/;Wn\9[I+k,s^r_uȀ}<Z#h]~nۻ;:ٝu4$5}+z^v_y-oT6cV>^TWom4cO[Ci{I a1zj #W^ځ!ך#[WCu77Tx%Yʵ)%хF{ Jr+/*q+ u9W^DW^Xo&-rn9{8O`1'mdhzrE)0Ky#:P,A)83;&V$ WhFM|p)݋eSE_Zҫ(6KGk[ O07xާx~m^,ѓ0 [0!9xE+m!,x'-}0;o uJ)RCg0}_uEJM)bl]ze(h.0"+3$ cBdcT9V J>/|*b )N6X ²F$V bqp{ c'Nu֐6: BC Dx(C2uLiE]FvKʵѥ9j9aԑD@cy*R1" EP0;(ᆇ[w[`cc؄₋ Mjac Fz*x|z|M'PZsb91f\Sal \IPI7Y1%#,4*w?8VۗNdDOދ0Ⱥ_C p}G5nJoڸ4y.u&ODL'D@_xSH<اlM~|(v>w@QM]ܐOhyqA<0bN/]L _w'÷5dF+@4AG9{RJI]u+Xbҵҽ zS_9矲[}ݝg7 !7ŅW胳Ya0r2|mubuqY-ӱq^"&ӹ84C%+J;/lI.o-]-k,oF. #F1n-B6[W=c3|긓Z]WOgr;SnϥG1ʺ1KA|nyh‡C ;&XQ<(~}?eq;4,E(l+;>d0FUaf~FX ` X.:̵/%_>>ë?~1Qg?x w0/:D|co"Dl4՛ƤqT*M:{= ^gv'zҮ(epղR@8:khkhhg%wcJW<48b́_?6E\/̜|s+"j^,?/v1wh S3+5sBA1#ª+0|LF!ˉL* OlR6/m8In2u0Y@?eG|p,bN/tˣw0咫mi Lܺ?M6G~i3iw8z&? .$XdWJ |KIbIGe|YHxl0Pz;GwYdX~{"D%g@Nd#04{W~T| 'ݓTLYM_ϵTd)-eiPIQgPCJRtb 0'lobI\%-7}r 6IU= Gp3L5Uv .U7)#WJ`U}$-WIJްC+M0U R逫n,*,rGkpAwSLp)kI-acr<87~% (>i}";JbxnZd}'oƵ\X{1%-G\Ґ3$dn9ќ8&i_dGL{h:a CсUr>8"2'hxLbM rg<b-Zt;n;{,Ŗ:?/WieN\q4W~Ë/e6x,v`iDVj3X X"NРBu0!/ lX12x[fYbc }")ɼ2u?*ýEDJSQa [8{M : ҊJ*MhT0.֮/d?ˮ^Ec6'R{?R?-cRAEgse50F˼{_d'zz5q!LgN3k݀ONO+4$.Ҧڭ։Rˇ7y$b&xS@ҟnPt"ouvk_P5MohܸuӺ;:: eOB,8֠̊w>צlFRFgƷeYWfSa n-eCE ~VdtPPQH-H^FPT1F QqC"ʕQBkfc(_bv8*-<;eL^6$K߈3*u U򈚨"I*ESř}! 
>^/b {?%cDLc4R硣wZL%7+>F4u!A^8HK[!\.hW9>ͺ^̉zt,|t,S\j\v_˺c7MpvR]wE7YUQ X0`.tK,oU.fLs&`Φ$GBY~vQ֡e0xF@>GS(8tow^ n]BPO9I`& ehoRʓds4T7IO&IUzuel[WI-x L+cH "H]YI)`:]9nS ))Oz3 N2񪭀2vՊVVKJxUBW4ФR1"snJֈu"IYDB`Xt* )Ctu]i˜dO+MrvW Br)ƳPF*$PqϺQ"꾆跣!nPZts[JVuF%] PWRb`KE4B\M7'+&jR#xtXt,nNL2+b1`D" *2IWgѕі0 p)2,x]!IWԕ\҇g77۪X138I>[nofgv}rSeG/8cԹ4 ѱ+3YQn?Lv0&eV~XjVճX_o}^ҿfH;cA;ŋ17B櫫Ƈ>8&9tm;؛߾0v9dOLړ>s]6Yr55kɅJR6Tz~@גkse׮$g%b`,6=m~ DX*U)w:9AUhpx1*dN4Bi1tC}*V[iG ~M1LJbS,?^{JP5vT ');ۼGWOwO/vOOZF їAqըXL!쵛BjHF~t=]F2O/k']G k &\c!Ě \Ѭv>[*4ї_VU>\7:X\f&y_\R^lm=*Ft\_V7oXN"ߺ墤* }lm?No{xG7/P`gGn?[}/d鮥Кsu5^+Լ._i2=_ Hœ燷LpSt%g0"NꆛGNBFsߎڟfض 0'4Oˣ2OHȧ35Lz6!ZJu^ l8fv''~%oCZ?OK9`m0RjD9n;p_.%E9YU1"#JJtIIK Kэ6jwā6_9U|*?#mj+UϱZ;\hOsZ;J ERvhSc$ $ qu4)kkSZ-nFR[47s%54qyZwvn׵lyLʊcSM^]r6Kylu4ayHyaR6w:yg =BO FW H,BZaBR!JX%HW, FW{xZRst5@]Ik 'B\p0 )Mpw3Ss7 +H)Rue aV<+]! h ^WHY[W*j8StD+5&]m}9@uIWoHWa uw]õ=kv d-tœMzJ 2 XIW=7=^WHduŨJF+F9fv+DW@KiBJn+xɹf7o81],;8b^鶛h)cMk2[Ý^ʤJž]2myvpJA# EpձH+oJBA~:"]!pY'+"t]!IWԕdBqX3WD+U7!IWԕbʐ*i4B\cSu'] PWYb ̻3 h5 )9M g\tB3 qEW@+Z%KȆ+˥1EW+ VvJ4Lz1!fo]U=//ю3؊D ]C ƈHWT4Bܾ跣&t]%,jbBj#אh+:t]!%IWCԕZؠ aT7cM0bٝU6sT e<͠j!Ђ<<0qHyTF!ehަ0,aIxtǢ+JR$] QWR2ccjDhtƆ+$t5])d  qy4BZ|%R4brc" a*6J뤫!(hLAքF+{Iv+^ݕn^߿rp}>/tLб&> KH iƆYx/gڽV},V]_y^s%_/;wz+U9f{蒏3?}=2 ?ƕΕ˲:1,+Un8@,=2#ْ!ajw3f3L}*%DNzVSmZAn|IYruL=ZTmDz1֪?XieZ8?Oht7_U"m[\͙gw7[;ws;§a4)ƏIz\L5 DgN_~|^n \]_W C^3\c_?[\B3qc8n?nԓOr^dJˉ#(3x=)44˩'JE  J/vI*)NB&K&s9SA 9ז#<-acOv' h;J|[lk lƾpx3 A7d2T\(K\^f:3 W|8x= ZpbޖfNQաF%D?ОY#e<UEy]6QAF36y&Ec'ƺp̕'ZҢ%7%@4F%Jq(m4f,͙t'օg~q:orlrl]'rͻ&T ejBi=7s?@-a!C_:u7QǏ/v޿nF܂-om4WMHv?FMN`s<-M~x<Ww>p| ezBj/)Y≊)լ[vn/%=pfI=T=Aag.,b,1cJ1(\OF= \ߓ ɦףxNo#SU,(Q%9*τ4ƻi}Aɽʌm S:;mTK2ŔBYihVrE WN& 0YǍ;e^ GiZB#z Y4\? 
ڶcNz 饤R6oseҒ 4B;UjW,ri-Eؙ;u*BlW`a8zaGfRigʉ](UB,/ry[Aae,a#gyN)m;UVٌeN).S`R0qK׎W ,.rYɤcYuPVyA2ӗ}K/;|~sh/V-  ɠv\PD/s^:Cv9E,D^B@.5rңC61rjGPsY\`\KK4ej0^@ٻ޶dW 㾏lͅ,O[LiDɎbViQm'3"[f_UWU+BTذqԝ!Gh~wvP :w}`QƲŖ;gAREPf!l!( Ҙ3)e<5OcVRu/$ h|dDgwKZP o?vdA#hWMIGv o 1cjj;\ZPH(Q9NZhG q4E}pg;hjlS9k)&+] ٬@C+oݤLPhcv;/ն(U{U~⥉hw4@)0a˄0u`#dPEL K9G$h} KP,Sc2䨅z43 4 ? ZhJS;oV:+"HGPĩR 0r ԡB9$C Ƒ`GInAE!@12 J9wu~ĬiLI(B]~HA 4X#0 PAy#rYRғt'1m0Ƣ^<F[2xkPQKxNk} YKk,fHYfk,Y/E٪mՀ1f~Y_L[I>fW{]0$d'iC44 E&oH0uh cӥ⏹/kF gv6ډLҔf$LEUp?0 FRľظ7ɻ\.3y1T:ӓ>qszن,b?O>xϊ9wk@RT@e|V qd|I}TRy>X{j7I^1܃Rh:,_Z=W@bh= &K_?fkԴٹj ׳isݪe2r̙2(pprZ-˵]q^?͒)NޝbPeUi#>Y0}-ZY%B>J~ UL@iT /,Ԍ98Lw<[aQ =!Fm{tJr28AsIs1n4`b6юklT/;Ё[Ȋ[ׅfP٩){`JD\Ͽ&g2sM"J0QәqPԷ37?ߦ>cL񻿿;kГ2 ɫm y e;L߿i1jho>4UlUO,#渌םJ(Sm{Uߦ R`rti-f2>!Hf0]D7bX(ƌ$NA1kS_}mq3g"!$Cd5ckݛ48w/1Fc71Ɣ̊@C5C1#f>&#VUFm6s29DayhP}NoE,JcBx E,Oƀ5HAt\GC<ΨC{d1I- O}lf)DέkvjO?O810 R*3!Qn9wD ąۚ09|h=ƒw8g$hJ\ʍs&$wmbr Cͣ?ȓ-P.P LM(J2R0nF#"F)K0O&*6.9ùEBqI{VM'@rXVoOҹ͆rِ5>8 E6vI]2B lI6'aԨm~7wZ M6/ݜǭFѡӬrNy}eHKj%3,4.|Hm`Jn|3y`>zW s%UyZ (r-FE:j>|{W7J$L/Wbfv'D媘OkKaf@*7^s{2ȐFYL+BYRpi5>^&u#>ڋ[͐lavQ#*Yc%:`cZ:g9&Zq`*rrϭQ9v7aJ=pw5>yzO"꜅Tk} >Y! 2Du,RJ=3|$ İ1Pނ%VD/+.JRt@?rI [Jץmq/sg '?>R"uTCU(ԃPJH#\ӳײ=!SI ߂M|Zp5"z o_XR~+Ma;Mr} <+Tdnӕlg;w%Kغ fF@VҵkY@ʫ75.{UhBeB' .(wE:aϦ:`ko[O?"/嫹{K($'F-Ms}vM՝i: 5lRl|d~! :: 2h>Aڱ|^g@h6YL59)C2߷~ +ƻ/5:_=Jy $'(89`L3i%ҹp(Nsҽ!}6(!%\<ɫ̸}Cc񉕗cYh0NNzbA,}X? 
mVs9 "?`$wX*(@$tWA"L1ɐR{*JûpNl{_x vҥiwc|{tx:ow}գcnX=3V; `hy]1ګ TuJ6io0Ɏm2.theR1I x/mՕo8#)򲜆3n[3Xs,L Txz)4cu{}njuޡYQ^z=ƓybufSr 6`в@M{qMgǼM)NDUI=厁ƴ-TZQi]}}fuQLG2w<U;cގ'F__>|Ͷ"֢;uM.Wc!D+-+)BΥ+T>Iahd@QLr,U92j"Ya}X3ވ[ 7e~y؏baWУQC]Iٹ.zڏ^S&z5vnj2*V黮 sR \ԋR8 <~4**g \,sE^hEm̎ERKz+!-t+Kw\O42+l;E5 쌡8a`9JyI{EGewtl۳U{ݣr/k_+vUe@k)?:e]44LL2,1xsJ u8ׁQHFW=L1<{ɄBQe (`)<8hQ`W1#B+;'ӎX2O:8x??Nx[ʧ련Gۑ/(x6Wn| ̰V .$g|Xa prV%ymSE@4iV{zʂdR$L(l3 +BY|%n$ v yI]R;e my5y8xĔPϕiaqԖx>"R) XL#@F]Xݣp?!kF!FZH\+T"Uc07cJ#> F I邎[Xo|{tr9 ˺;-hZ6[6+JM(rS Tt&f9w()@!jeRDȼ0!@7q9"wEuT,m߶D7Zeò}GHe \:ǣ#A 0N cQ)LY9 YnP ۳}F( 0!!j PPP։ Y)#uVa0s ygc6Mpʑ[kbdZdʼÒR9#2Ȣ.x?sZi=Ӟ4)U-DԵVUneIFJichI=U[}(P #Sfrny-/yQ c9)3Mc\%fV1ȂSR(|ʧ=}jϵ=tln//$b<-G<-zBÄ5z:ǧTOf)Tx <<^#<WIGxT0IFSe#Ry,v9V O zs#s&<]'zZkz᛭sumW_\tpQ:E>&vL%P{󥛗 "?O$10y3tṣs:xxܽ'ޗUk//1K_AJ7ď+ -m:%' 6~|ML[\@\&#`gADIr̀ڭw](}e9,ז,~qۇX֣B"iBr2A" iX4B8`%nKIB`㰨5 ̻TQwq>+$Ewq6*KnE|9 jk-K${͟HӚѨ%XZ3bHV+.H;AZe2q/3ο牱f5{y1 6SMRZ\NRtyMVլJk L'jjjI߮zag/l[յڦ1eKsIp/ЊM;mSƽLZۇnn}zh'g`ɝ[./; ,wșZvl9߻6[FwDd;^.fkt"Ŭ= +gb:n96~gsx2?hgކ2a tu 7-5t\LUՈ1ImeS&Ԣ4)|c;/,΃ @zRʜXK|/]q>B}/<án\\4E;_19s$ڪ. $]bɖ2""M\,_Uc$x^/>nRddֶ3:rm伩!HT8"}ܾO7ͫ5>ObQ` B[2 5Qӿa< Dz۾5"f~mLl9lg|>i>?~ V"RH6x␚Ӎv-ў={?Y:%z@- =/RgW/ߝ~Q\vln/2*{>=am_ɺmbXTY\,S*6(_Rr0̥`Qͨ6AuZӟ.h'˃`OUuަ.iJ̵ B I|m*AvynI Ȧlά;~^gml1epKFTCR%TdՋHXB&isvmdX{RAn!1JҨDWzi*9Sf7/`8i6i7iLͷq잝Ɗ[Ϲcv~?Y4-]?v!CzճWW{|MYks6g7:6>o{zŏuţCAw<7.#ZdM_]qyuem`.=eLM.]\i7<ZBڬS;];;EvhvǖvRz.t5^*|oG@“R'9^Q_xs7E?~ opx|ԧ1lL]oZԛ\xq}L(0qٿ,>LZq/ҫBYye7z(!hgઋMܧ.WW]pJ@nã>3:'񸤓׋9Dv.~pr\G3?FLo:qn^ ߎ`2N1CϘ۲&V5b}jOOO|JχV#ǪoK)L~v'eH#)4fNOGئV 67͵_:77H%<l%+4K+fi^rJp>v p!D]ZQir^ \SCpOcr*Z:vRzWW6 \A\R+pil}0)ޯFr(]҇X)ovvrrz/_?͍sX."_*|86ٕ LؔmQe5"3$q4*ZoKZ,Uo~TݚƤA[ѱI"Niz˜61&ǡƠ b%Gzm&F%dN9uj&VͽUMم譤BmM= xWyRJmϣ^k7P0ZJ{2m&h 'lP=JA/0vQl^~pؼ6k_ZwiR1o}!C(ّ),$&U` u4kYf a'`ƤLEa<~VQQ<]|9^8?Ύ;ӳXZzފ~9\4Gk&hV!09*bf>BJY8繯&xaM a~w#O!E>)=8U@W961_CF=C۶mO̍dyu2G,vLQ> fsmHɷsaFkKT*nhk2)LZeJ!֤I2 RKI 'I9iIK6L& 9Hpzrt-@sp1i鷮m0c|pKKjͿ^/{#I_ŧ忎/ r,fiD~*ûg$*..N$8ITR6yJKU5IgĺYWUa5G$FĹUKD}F[Ez BO1ϫ6)5_\{3yiYvR=JX(PĘͮ! 
LARr9ֆEגY5%1Jusd+&/  W$b,Zdn!!NtDk*ЧWjm`?jZ}-eq{ad]f6##qږqfƞ [4*guioiQVJ$LPt66ں-ֳOԳ'y؝[!l.c1u Ig<*h mQu-kH:/}V/ξ4Ay˝-b݋e%wk0O/7ZXXrgh8?^!N(ſc +HḭÑ5_v z}[ޢkіDs=_-8|x5R:`̷ؠ ѨYnO2}Fs|ji8ͬr <˱b0'?7"Fv71{>*i]+cX%!ǜW9FJf$I.κ0Ι߿9O&xYysDZXL0w+! '-9ouT(Gko6fLZ-"oyg9[3woq_ąKIa%8`9O\S"nT>EbGMif62k秪\7L!%L:U.VT֖]Mm0vF*O<)McK5ɳP'>}>U,DDQ CO5LGXfe]ͤ*7)AiQ3=ggs$I9j<*dV:$gK2&! 0܏aH1`#?l6E*F ^5H$a!!V6EWs Jz)Zi pT1Cf) ױ[g1 f>?BZux^w %9&U9Xdl?\q\x\N64 p!"&'S8d5 я$B04a֕ pXʦ).'p"͇9RLInKJ%Á]ҥ1iRdW2\ %"A0).EI'xt`9+ Osf7* Z&lc,`{ָQy_oL-4jAQRXKŚ]M^I*5QqYw39hƄN؃N:æ8TX5)b1)0rDMA &V n!sppQ~R"09ࡥRLTmG 7uXV6kt)梋-"21NS&I646T51LQw-j&° ΪtGrXA Ц I=kEqM` 0f-(BiăJIeT15<g{{=m{1t1E (6 ׁ9X/xz,r=Iت Mƒ1oO*FaيcxIv`'{H"52 Z!ܮÅl8q=3XZgk3/}6)뵝`]}885!5]un8_$i\XK)O*,LSIb`2"eH:ZA"gJb#=.*=)GF "k{ /ĨiTk85vjo3⸰ fPmkJ+GbeK^}M8HE#$r_*@X8Kp-SᐳBN޵6rcٿ"S",!ӓL$tvִ,=,=Vɲ~ib Tp @,nlC,yY]r-s` dJxBI%0@DU? D3a`[o,СF7%ZDn62eфKr}O%H%ZJ-D6D;$9 üy T ^!~L6ZYT $Bp5VzaRKG&r $ܚ!`vNqߦjʶ,vy@M-#ŲN[wwu:R֘-+> @qS"m3+ d}g$πZs0r޴1(XY;%j$e" Ϧ`H#B4,ӦM@t18^U4Bw0ut e,@1~ӖIKw 6Vt%j{&۰N",a .얳]ot(.#d BE4xDEsE [-O BJ'_/8`5lXŏ䱪;T 'E1-Ɛ3 kx`y`BMu jDxӠsOo9_; M(qY"UXX[@>0D(yTͭf5e7ysi2Wqq'1渏Q4Dkiںu+ {8O?,FFX2~ kcuBsĻmY v- dP:d"B1 KD[2Q"o ~(jΘr¶>e7!d4V"ox}S\pedQ$U.Ø|'8&aN-5[$neM2gJ9cI2Eb% tP5`O>3&BalCd(=@`0g9͠a!' ,*Os501E"oD E-HCZ$׶!o=@Ɣ1ih.&oղ8 )aDi[RD5JGx=k XbkV6\p]y.9Rۭԑ!*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uR*uRV59U@jz>:nMN> ORlU꼄J^=rNny %0 qmBUΪ7w[8!?Szsy[l7n.,3ΩPq'E%֕ITYD5aj:L'.Ǘ7u6͟Nt~Y4v[Jl'S~cd;BxHe6?뺧Y-ۊZ.G˗z7@׊}KdXgmFwB;UGmu`?f qk j5;;l_}d^''^d9_ene/Yǀ^d U'ÂO5MN@LA I"*m,<)K.G&6 gqKYv߻u*d8ס-O";HZ0؊C|j\s.X}kb*KP" U@Lp]q#&R/{rz#KFA |֋o+l~?ΧC,N.yϣ](ޙfqS6:ktʻ61tyXjlԚ6'Z{e7?vA$&w[i4l=Y깢/^hK&:@+x. lOc-xtR3e]k1 xa,c.Ȁ<}b/(z)] ʽdf (ejkJLsLǘ )A- ZwxT `HZ5BWyǖ)v7?F;i޴ 1|2\/wbGc}]g⒰ϿkUZSwbw9&2S>`ۅ)N#iyZ]EYO>L^O1o'XM)ֈ}kjvM69hp r~9W fq(|£ <_<,aXUv~ vVk|X{ZoͦdWkԵd>G][hRCZ-h0I|^A~ʾAfVt ⯗g~_Z1RvM9`cQqW?`&)p6!y8ay?8_޼_!O;yOOeΫ)SL@ &٭ox[~[7Zsk] ~},=qBr? 
ws}gUi\ Bq!}~Wdp~ 9ƪ&) DM\7hĭU N?X&Msx#rw93S~L.=%O4ؿ`*VJ݌VGu0 s!;f.r&oQ5oZb&'\-VdRd+Z&a.mDc5@ gӅ5w!"SUyjD+tJV2o(C1G6?Hfw[żtqUx |h>}3q 6O/|ǙPoH1Z :ִ4g_gU^)fdr5NnmQm/>?.q>۪Wj\usFT /95AZ9*seUKxlW`Cv&D\2'%8Nk%7PM*c!]B}3E]^`\&tqWn[򶺚y ;ʉ yLr$v E~r-:kKy~Ń?>Kc[a=0aķ k W>>LI&}=U?\&$|}Q~x_,~w ^r= ZP^1]}so#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#K}gSߵx;qN*J~_& ~DfAWraay>li򹰥6O- T\PKv'Oujոz Gb@{}r4Q5\n^TN%7Y~r~Y6YZ}: $\[%o-}ԩge}j`qut{_)>aveyݭ W34[3?^Wx?)Ȯoz+iN_%Xnhj,)+{fO =!I3h&)&))& ѫGV,RVg|1ޫɮIʥ ¼d!o[e^^e69YdveB?j{l$eH ˟= k̇39g{f)<),-7ƳƼ~=J.ë yJ~JTmJ()*R]V%ÕVNi_A'Ml//I, Po**ozݡ>D=m_-qj>* vxV15CMVC՚aNX8!ZW8 `LVޕ$"0X,f`ݗyi˦D6Ir/Od(RTl٪*""2505n '@5jcؘ MvwmOcBl5Ӎs]iT\0us0K+Hn?o'N=}l8 WRH`چk?#uyM&W1@|u3 g] }w_oo_Kcw_fF{v&L3}&˳Oߧʑ _Kj𡞅vzb[P?RBkVI[ p2FL 0/teù#6dl\g*^WUG%j)y NGKykdWt直wY9kKFx-'ӑu#G 8'VyHmL@LܭN@LJX~Z6E'Wg~}=!2$>a| M)wAPBK Qh"XM(Q¢I2._ϗZ:S*ˉ]E׮4sx7>񊶎{)Gz(y_݀Fcu:%3sc|v:1^,kirkm{6c25AN_HMzs! O$xJ6 Q2fZO,mlr `|^ΣͩCKCt ~S7N oH`[PBЕU7yJW+6^Z9Kzq]Nrx3:Ha=Hd : IGJ&2 ;K+Pڽ@Ľ#ɅKno"XcaH Ac pձk쒟G&suCW2ku9gn /`G_痍VXLe4K0EnGo'[U6<΃\{~S+@(@^ S>C.}L_;[Og0-?Y6[x"q8qՁ>^2;~vAcw6^h5?{5/ӻd ߮fQ_wcMk[F!N+ƫûPw~C_u )z }]g>D3:FF(4oC F'B s01=2!p^b?rLja;>RL'Sk\fk n8gnE͒?#ǒzx HSQ9Qm]2 A0a"1$DyJjK+p83 DA?X y p-2& (6c . QG{S{Y3kңeyu?>{ 3o?ݴ-VEdG Y9ޕ`LTD)NrM1ERcuO&.NW vbz}%zV;iNRP=h|{Χs;Qb{^c]~oU>|:N!W *r!T) :>JKL^{NLL" fUvӨp8M2]>z켻m`c)#Ph[pB]:'DhCk\Ac 11U5wFgX3gv]B䮱%g-Z ϽF%`h4wO%$=JҜEά-+GilU&7\%<ג `:W8OxnhuV& #OXU#(ͷ۸gr뎋~r80|C3rcҁ&aߐ/8=«}|u3zq:o'[np\p6I4OAQ ~󳛽qԉ(QdyՉϨk_=Ɗ~\ݢ5~\Ïkq ?>ȵUם3]Ïkq ?5~\Ïk0%BjU#~\Ïkq ?5~\Ïk&RVv/.9&'|#ƀ4!%#c^-ĎN[8ϧ^cys 6X{hNcՒؼ'*}ȩ7)HetG*YXvA9)QR`S`> $X֞ n9 D3Xv v@ܡ)pp,}'9M8svKhz>w|q?n֌vl-lE7wIU'981yTv I;N>P `iiXRA@iaEIUQusDI{@XSʄR^@ BYr!yA{#*J/pKz9Xg7;$vֈu6!$d?,j`f## F9Q.50ӕLecVhX 4&I\XlgJ$It\ L$NcPu a y+Vx};P;49^g>49;T;TVpPŬe.{h1q.z'5pvIг{gdvɆPݝR*U3k\b;D &IoFhΘPԩj8 .)-> N;E}A"WNVPzYajlr$Cݸ^|.( %yHaoA!~ QdW)J9ȅH3͕ipJ:o+bVd!X6]c@(v+I~x;h!nפ.O?:*U҇ BHXJ@ICQ)7zBTLϵN1!.q]V&% fa*6?$+T=UMʎzV-Bf;.tS#)bZYi*oCN!ax _xEv2P, $h8Z,h? 
':QG0-z m^^OTo_"&"[C`1 S/CBR&$uZ!I9 Rr(b{`wY[2o$q0DCJPXb8I\v' ԣyv264:+n"bwEjq(HD#R;ݓ:t5Z(d$Z8rT,0W1ɍ1Ⱥ5DY(F^D)rar7U5Q8 PR_c/Dm\#_@+gVHIO01m0ƢN<tF[2xkPQKxNT̢hX^g1C"0^c:YԮ,VMkpi7kp0l:Y0yl$_Wx2 Irͤ?ΦlŪߓ'^1~UgoT :ޟ~˙BXh)M'_IgKX>s)j$5J[r쑮;gSL-F ޻ILٺQľd+n ."Q>|sG41~ E% Qz<`pr( ֵ|UX_/CL6YwUvnJU?=B(K9{RJcU_Z=W@bh= Fs_sοdKTΪمr SumyfzX3N9M| bwrZ-]p^ O955+K;I6Κ!h8"|a0,`ZU1{gc l:*AG>dӨMϚ+Z;”G\~>5lv_lX鱶S=ﴴC{n&+n^Q(sg\ 1Kj*g~ h8:+ܬWdb\Y F7<JgAu;sS`?ǿ?>Dۛo='x/㐼:.a׆7wCSvZ0.m>rø_qg3_v*ijv&nQ)09hhm4Aq-xuX0ƌ$NAL1kS_Mq3g"!$FId5ci> cg4?v 3`LyϬ D tQ4FG0l?0*3jCؚO̭dU ^!B!:e(a -L`FUh)hpuyh,&3<_Ye㩋m^Lh\K| L)@/#(\*dT*t95y}5I=\Kv .'aSdlǻttrNg6K5YPV04JfWJJ4Y`AGUu,\(xBo)L)޵cWt9Fm?E-SOܛ @wY-u}.߮670}Qӻyy3E%`͇wYuZT"ȸ ڮZ_e@@TGË^̨_{W/ ᭧bk۸ Kt΄Du%r'.lH+ pmƍ'vfڿcOv},'L1P(~&`kmAn30ZdhBQTr777rNYya3[$֛S,DuV,duvhkr!idch%gJU-;A$໫@ӫ? ,}ݦ”m,[Eyvj.*p҅A\33RX{7bJòG C՛RζenA\4\i!G^ Ji*Ev`E֫nfij0&_j{\ѻo˸%/}'C D{"p{dXC}&[:ԎLZk^ JZo?2]mُΫ8i/ c}h(уș-">FaNxةTCS)Wg:O x3:gZ%OVH G K1@ 91n `zoыŊ0d4U^9.ݷ{R ֫J)*A\;A;pHuUu ut=(0:( i(uU..[@(89m3i%ҹ;Ns!ӗLxV4<{+KIW%oqhdkӗ@Ƨ>Xo"2|"?$%FFj+K^!K3$CFKUb+ oq{_W'tOɧ߳ݒv+aM3+=Bp/2̂,7εcU6@20w[>n}o%kA[| 3y=6v|y|Y3pfǘc>cX63V`(iyY15ګ TIH {vaZL%mۆK<]7Ztb6~ǝ{LRnIu|[g?6*-HJ"*׿16V󜦑 Sy X0Z,;4[\n'wQzY7ܠQVhXvn ݘ1"Ǿ0T]ܘvڅJ :M tҡۢT*Cp♧1tgnaے ˇsRYWa%SHaƁ&k@ PU`o;n .5zg_5M;**h|1+IޯPu_-?AȹRyG+?i!<,(Ir*BFM$ T.K}+qs//Q,L tQ"z]Qh۸],:ڍ^U&zvLgj75bHp-<VlmuI~ 6&VIdV9S*`+8B+BugcTvwĽk i[z Q\a5)ʨag Eā#Qbt$cHR+ >*ەcdw,Mn͝\ݗ#F_ zo[0.(ڥוmB ~PTec?26[&c3p)呌+y՟W(_b?%x88  #1 &5$RypFk $cF6\{s+ON Eiး{t.9Zqo 0.EQ,Gg>>~W`-䌏Z:̣B*$O%vsh@5jVձshNF. 
0(HQ:*[ /vV6n ?b!Tr}7BN7_P9prZ,2/2AiN #ܷ6f%Ɠ$Xev]qUqh^AnewDz#sqLE@TpdL(\D:F`a68Ϫ㱎Ǟ@7)GZo&EjZ*KK10 3Ȣ6x qZiӞ4)Ԟ"jUU>_M/7Q^'>O :($vؘ&L`_skUk[J Hhr(51O&>џ}Ծꋓ6SPZCgt +8]i1MogqD1rFS(|ytN;z޻b-RRe=u T?HsPg0tYq_`Ӗ2'<Wl$FY,pF=%NyI ه'_a9,r|n anQXwSwZwmq#IW?7y o b_X4a˖Ujl{TT*.n,X,LF8 9#Ta4]&[Xw4ĔRc̣rV݇b{RP0޻-+]oh=/68 -e!2 0ȿeܢ?bU?/C94 CJW#26@ř.,;XoBHa BAcys%pHW} m lPYg%S9Qq (T.nbIMM9v<NA)XN%2gѰ|g<&`g=-4h}{򳇤~V9<>㓺N^}qČ4"d!KD(IX{<"vp}/zI\*X~6LFE Eݽo&{r[:_ ENYD!R[ƲɜF;%GrO ~̼䛢}CUtlVҀ7Dg]YV.0 V<5O-9K ~Q'|^ΣtbW:u^_>;lw_#W\qFu%͂ѝ:v$BDy9*L||խԋoxqIK1Qte-E8JckdO~|}d60Z\DoOmlSzѓޟjxlMl-uetS??>NQwuGP3Sh'wE&YDf*u^j1HVGw?~=]z/ق%%>{lE'%sAuT5Ō='#ސJCqDIRH=*MDbh@J:'TIw>ɇ[Y[C/LA Y="#Y6Gp T1&LD0$4d֙K:+&xY71|]1Jׄ:"Ht{Qbb:fټDW7UmMbxg߸te%Z޶ruݏ Wͱ<\Z|z3b@) '2s6:N-!@3dkIMI1;Z`=}E+,WaG^T EPE5"J:-*̫*{tN^FF!ObV+Zc E):FB I,JCΊ DD5" ы|NxA|WƗ =6ݎvyQ)6H9~eGùO!w,l+|V cXӽǻϽ{~7|S?{Gb_ tmvtQygKxYzsz nd-7[.X`]ΜXW 3һ;N^#鲸mO:0Y%ú(jE#9e;9]N*l$1ˆ?s+mv|6Q,E<2i Ӈ s:krRO\ݮD2\|hRKIۡ4s"^P(NeR:[jUa^Gw-&0Vf^qS!f%6AH2 bkA"%hMEyE `kOl;$'nGnACmݳЎưWwU~rOqp0Â9"k BC73+D%]e 3 KBXT&(R$^}GK"FS0Uʘ #T:RD5.9 v^n^l8[z ܴ4ImNJ_HsΞߤt@އ8=I.=wM`@IJKIγ' "99:qB%c)LIT-~ ڕN\K 0KٛK(vfj5dIiViIfEgStH6IF`HSъq]@5Zg߄zrqF 9{˷uC$GI(yrKZZ UWf:Iڐ:AFukՈuGOL,;:me}(l/(L % :;Mݲ" lv폳Rvy\+#KA_`#ÑhߛKyΪƗO!GLDC7!0O!OW ̱ 8+Jy>\D6VYΒh*sf7$Ʀrgi;' ݔEnc,hXK3N:hyyz?| II'8bFHV `AQ,dx2EP r1xђ1 &Nц yͬt==:Pp}. Wy>跃{@RdRaUWtE)|/|Qf,XF ̛n#7/VrY;tJYg}}qo#}R8gwF:\͚C3u9J./>} 蟍!"{]`# ".(Au(4'mn"1C|A%/J$ V6Ry פ%DD1h2A^jovC*Ѽﺽs`f{,Φmm0S*%,4O|h@2Ja-N*cu^%r)Ez4z(s2*D#7Wftnv7ġ8hK|oU|genO%(m &z(5'lvfS1AKZX j%4fnE]O&7gw5)k| \ Cvdڡzw*j8vhuvhohv%$=4%gYU1.hGRSbǘ:4GRܱbeZ{!כ>(p<)=w³xC{ wFu4??u[uu5y5=瑖n:;щPf|x# |sSh|ERQ"́!IX2ؠIؘ).y`l1l v)$b7UC!^[!TPYOPm:^0r\|65FX6֤3iV- gd!CړI$J0A|Yhp#ַPdcطm^qxdܡޡrB|X4{}NɕN%]mlHMk1 =#{ fŚT탪ƠJI JZ9ONH1{"gG[C#:wP,CHwv&O%Pъ!+ Dt)0Fr@ W!vk QYˌ IP#-ըyQcਝߔo.V'̹!DF;kiၐi:?U?Y/_]g}-×I W88i"k5s8LA2LT}#zgy#!biUXgt/a-\G{564:+n"bdp%qw%fZM&o$w<{)pBJT[ 9ѣqULt`Ó)/9矲;TΪٵ·wW»Yad8d2s7YE[1^"r33oYyHy$t4 iFafG`! 
&*&t1f3Xw-y86ö|C6Xt"UcIs)bRCcgUIisYQvˀ\}n*A;;3ENR^ڦ"̏.U^Erf%]V?G ۙ]폟?|ϏߝN &ߝ|2 ɪi o<7Dl w?4&m ㍇3jk1.m>rø?̏8ѓqݹ/Eq[xYդz^)09:h-UAj).H77Cń3+;Mrv1oӦ<.O3g"!$U$m `o&80)ͻ4ac)9RA1Eca(f>*#VeDm+;ux3FX"Th[fr (aTiF5B'c@Gp!$šOħyJISۼȷ#"e\t3Ͻ+,%qA 3ͤHw4qX[b"~U<<\m >Ok 173oR#v^4b|{zw ө}m9IzRx&{Iŧ.>(\ڵ|' GϷGU!ѯp)>^s)/e&,'@jWðt)yBMfK۝ؓxY0.g&Q2ثAFf}g:*}TGjfPz4Å >R6RP;ߺ]վ",.poSv$'ZMMkuvU":Su첼 ] E%ٻ_*{Eq;1^`5f&SMnz4BѿƘY`T꽱j/1-j-cieLb"eX^Յ~o̤V޿=a ȡ!J,h TӣlQ1"Nu #(,L"F"RT% cd: <"ʄQ[ 0'4ہK&D*x~Շ\ʍKjmᘍu2z|uG_/{32pycnGu0oJ ) 5٠FDSoCQioY=xD4Ddؐ:`. `就D=XyAhm"y! %JɈz 6K4SkC'h胍"(! K !<:­F ${M4PB-m6%p`gfdC@B1;8ª"DTH!#JsDfH$/Z误GsL=)5@8p,sGSi_pdV 2#- AH%h#&:6lρ:܄pNV (=WL =Upadz/-sbD 02 X"NЊPsTiCZ&ߒP?>H;SQ#01a+Gё8Q4"`ABv]6,|=]͗g+>HmN׌zo+ʖlwl;[b Ĉ@u $7E<t]/'#,  wK&Tk$f \L"e'sZ(y+ɘ {JwJ~K9y:jFnC5-Yrut12MۯM:ǕX*\:w fkf3 8׆a0 -\/ƙsܢy H@(&^"-,2ǀTX*9e fdۈڋXJN>fH^SK!FZH\+*Tc07c"{_ 9f,U@N[6e1E+PsiwRMTX!C!y˝֊VE%xb Ĥ9ζV0q㋣ufI%}N7r.!ɾ*llMag%R=-ڄ3dY]7檶Ɋ\7ZR!@aX]2;h: RoРt]qsvfLf6K^nTV4*vڷ^}<8Ǚ7-.S.Mqk1rrugO{瑬"A |u Կ/uzJb;L:Zas":X|H`=mup g^r4Cr-V[z͚wwF+f4x6Y>>?Z]UxwzMLiaսά\4FZkAșk If:dóT ;Iz>:Y:#Az|tr1}(tͼò.ڏ/fGOOc rA*-pdJ+a cZlAG+CR iJ&+{i-u;j5 . gv}qWAFbfs(SҋkY^;#3HR@0sf!Z[|1e4'y<cLJ dCJDө]/ÙƜCsQ\D `I I dRf"Zv &n@<JNjy/[^KayW-'I5=t9~ ״g. XlEܔUR\9i9jdFU6Kp1μLi,%LdN(WaaSe0ؽ1@K &?N֟P9 O^O'nrLe~[={Gj&^-{̙"bM!:ra`kXOD[B8ړoox 8LcI'BL4c"y.[dDxMAЏ ZiPN9Y(C f`՚8{UNW,f۶?y BTl:z9-gkH'BBČF;7!v0&|01d0F_0D! 
>JD4ÓA ˨3\HDd?XKjqԞ/su#ŞV=D01J^YlIK΁Bz[O[زWApF2VNJXv<gJ))C`*'ifEZ%ҢyE!audzGfW =7<6"^l=.ݲ5km qe@RQ(Ja"h(Y"i-䢵>IiT:i[{ 1s2/#Z[|bU60b_wؒ;%.wԤnfd5S1B[+d_ ݌[kgKqJ@/$R(5k37PT9ɺ88Q7ϳ0'+#5W1ǓzsmO9#cNS#"E.!0A,9*eHވ(0ci1gqZ=Y&]nrB5xwa--Ǥiv9>(/P0CUĠ3WL`(X:ZuF+U2m#3" Ed3)Gl`]^ؠPr 2) Zn!>NSœt~I0Q"1x4)DX*Ԥ@`Pm YfMI0.sX  {X}}q /UlvYu4> ~1!$ tX.rT3\:!cvBmc] -7m}r<]q&~|~ỲiQI OޤTФBr̗Hq혭&*{iDEq}3s:KHiTGˋ]o3y.b!Ȅ U NxV$serM]?ښfnO,weѼ6ţaW}*pw?WAќ̟Ӵ4_(wrR%=YJqV4u,V=K8<]3WmtOդyџӟ/O^7|=<^xuzX#J̅?l2-Iqy߽:c%4LkG2G~qz0WB h,Ei?\]хc>.\ٶ }MnuӽڊQGHXwa,]bT rUNs^ &v6vh/αh&|s̏rz@2f{5Uq?Uih00a鍪8b.T[մR1^q{gIoN}o7_O)__y5'DW&F\o~h.ƛ --3mǸBynu\PǍR!7UKBɫVtYXc.B3N].YbѱAqMggq3iN?MqCYQ CQ-:;l<_In4OTJ4JX#R=-@d0Kl?2m2jww6s.X 1!K 0{pJ  AIoyn8CkmuyhG !l_Lٜ}xb66W6GPN0ƺauF* I:!aG#5Vv=#9GsV(}XX#tgVﰸ|ȭ?ȝ.xw(oZ 7ekc sAyhઘ3Rinz>dpi֩.;<5T~?~uIʛw?6Zvf7G?y6mPit6fw忍J_i/ Ǽ`&$$o`O[Y2tx,bF-6%`d"٬_g#܇uaTfVq4+#r@M0YF۞]\YՇ쀅.mm:rxƷt,`YVMlJj:6KvBP ^ZF3@zhv4܆_Woj{cW!d0aBfܑpivIy0N{qR; ~n?&Ozu` uΔScsLTTL_Z0'lT}|ƶwQ,AG/+$UAN@JgD[V&[brE\iY:S{Rdێui$nx?FPl,ӠRn'SDRVR^XWUsfQI[h)MK-.w:ML_'3bI!qr8LxUWLw}q1}9 #m1M.y},g\BÒlF$tbWA"L1ɐR{BJs0mt{}-Х)`NvKNdp3kO1N/An}y=!律C><õ3T,כԛSns+Ȧw.:3-)0wuX+n{mjA4K|ݲi4^B/+kWS ˺xfJ3+։j,OLC+Ǵ}B=hJMvP04MgMGsB xt6}ifݡ f8$M麟p훟ur=:=LmFGLa^YfX]!Q3_;MԹ^xOQsfS" Ln蠦 So *20mjLA#ziUE~GCt4XU౓*LIg_e_[ '6ށհip<8}zvrQ`P< ;#)8h4"amEZFg<,a .5Bm<&Fs,;ԈYCYo&)H>~{དྷS7fhFJV?e`|cHR(J&9+a&P%2j"Y6ѯ #NrdSmtПEA+nz8|XDC{'ןդw\\G^SWΪD|+rӛMḾm,L&rcz֏5J_O|p P4T777?ÕyxJ$5FG1ݧ_IxDủIˎ,mߟ&XKf֕d+,Lj&^gaqBx6|Xp#Bb. 
_/3TOStݚ.[Ϯp?N^t7 aFJ8Q4`|wE]/`6оnqS݆Ѵw@z֝Gl0G,$.0|5%SSooի_zF-v{RZ"R?n3(R0MiOJ#-H}h{sȼD9%8e ENQF ,c(")hrE#rIr$ws=v%2,RG}Iвj֗ʚR`Cu_ jHɌ&"*DB3y&??#cnq .S"F[VڨL+LHJ<%Zg%NJGL \i/ nGmUV{h[Ek.ȺFFn#Ms_}VJ|yJ.{h6k4L~s?_a=kpOJ:\#ꆋ^fVi.ݛnCȝ)Mt卙l6OjĀ.&";`..S=7HoWս_Լ,#]g[J!Yw6lc̡~M=TXKޑW:ul-ߥ+$<$W 7]`8Fʟb0۹Jݞ}q= a"\Jl \ 9AG3z ge~D\+K/9Y0u HvZrӂ7c&#{9X {H@,Z9q+-|N&1!F6 c1J ⥴QM) "It$"%M.6x8pO8`9dN!Hʥ1ʦǗkݾeE} t=_fa,@Δ/n61Mml,_"V $5EEŵ=( Nbp@As\!Q" gv&Q1`H54HˀQJ$:%΁/ mV=J~](P8 o1 L,Ƅ aY M#E}V1ø=x4pak|>"F'UU"e 'C1Ҋ:/yIhtZ9H1)qgcGac9U:Hr#TEʴ6Qq"&`#ca$h6o=ƾx}+PKk TMv,+TW$g_I|'익җrV2TiLd%k`^3y֞`?۵'nIړמxvT<k1Œ`)YDA<aV1UJt 6JU*@ b1sj4HV y*Pʈ"$ .>V,8y嬜,A0b۵[B> l%6b+J eP?˲ysf ަBI#36eZ"z5Q|=$+]Jbw?9(&? ᅗ: igFd)q3 Y*mkDabd>,2 B4HmrkcH @03U6q :Upj;n;Y`j$ œm(w`KF0L#N# 0rc3bHC" c!$QÀA$k$1` B( RazQ>͆'{zb_9o<5e, Zls Hך@8;b4H d|=fio6R3'mkѶ4Byrwv!!ܫlUMi,ǥ5swZ祓l(5VK={Rk^2@g#V`P 5$J`zI f!-dOB0ost] _%DxuDt;zE8Llp{ *I:yRdy#!b"c =aoVZVƗciF3 r|YmXiu aWJgE aU8P ;Gv .>T<=5U]K%{RJoߺJAm+bhͤ{3t|\rsGLM.^ܚ!Y-!Z|v\j=lSN\(WuR5 O4.?\ѢsaU$[b|{K]͐flfu*h\`Dƣ}…6׃َc+j*A[mծgM-[;B#a.(>T5bvb\鱵RCnU zݷ#`Z'(k׹+3-i* /|hD"1Ls! AP¨Ҍj,Oƀ5Ha'\Gz!$-yh yU㩍m۬5,i`N:ykpO`:TIϷiW?7ߛo1v]ri imwб)8r59÷GQK0:-$K]dĺL`tlpJWI+3l2՛ioxY /> =jTCWctNw1$ϧ囙'ͷ':2ZכOӯ_O:ӛһ0[ΰ ܎n|δ[4RAu`캥y3'u8uaF/׫;f5S16Iq](T`B: `fpwS &fcƓ'fڿg盓O_>, 3;J̚8՟MK؝yhŒ EQIfBhCds̓G _9ùEBqI}S)Vl"L&hhmϗ ѹ KԳYuIo Ϳ5X}ӡŒvZ"<ۼMCHS1NIgw@4&l tBnJêG Cջr/<݃޿)'ҿ+D`׋gp/m67/=eTHbZͺqn>F?!Vq.|9XYҷ:jz a׫BWSVԇ[ȭIytfHb#FJK~,w1ᘦΙntDK|@LETa5 s#N RWPNWX$hċ?E9 (|BRd8T] zfHad+VS4+RSTy,?pޓ^TLQqs6wFvPpHuM!2jAfP*%4;( iܺ*PN ' 4-DށbinZdzL8-:q7!RFָb8|}(їGU=Gӷv鏙.r}(M3o !aIRmd2>5X Y"H)&2ZjR[ixĽiT\T㞌gnɱuٸݕ&MŴsƻOCg_c.,fIre͹@7v`ood]@77 HRFTn 8}ܯ\nJтoSPW9mlrYV27Hf+`>`Y֭3 V+gXX&J^^ݘjXz+Ƕ]D=iK v700̴qIA['r\୦$Uc}B>4I⫿k1c[iEjLI#-Uwa{9ݛ`8U"yϬ$>j*5T71~?FS6Qi{왇-m[UreBC-_?|L+c3A Q`]jG-B]c6le1: ݀v 3~?~ю2 `t /N)dm6xB'<+Da ޷b! 
(nkmH%`uG1FJ%Y^2Po$f`AIMx&D0htlҾjFdcC_֠?= :Uen^ D`R_U YSYra<כuȘf9w()@砕Bp`ya BNssPJJk΋"Nn&cozL}t2~Io\?E:bu߈Bu \:ǣ#A 0N cQ)iȊ8y Z=/HeP&D $4Y- \Dу_)#uVaX+c/Hl #ML "z-`+Kۗb@fE9dƣJZ+iR-a'KӪJ[E{fj+_̰Ӏ&32mP6"msg( 0uZZ_ Q c-]0M@2Jhͬb /BBߗmܒ^Y`ڤ5Փ7,f+mfģRd`tN/*QK^f7(KuE, ^XiXIhQkM&|sz'09ޤ d-Ĭ3PL+٭څ]|`Џ?Dy3PBt1C&MkݘJ cٔD-Wͫ¼JF gTyˀeH]?5?ϞPymJP(u]_F%KےYZRď6⊗6[f˭ƍ#=I?2"( -<'H J`8o(RoZ3yOؠGW\49ڷjFo0҂ e QrLa=5x5!FpQ2LDbp@uD8׈ȨJ;-&F(zgzLJ@B6r9k}W:l;(}Qf +! _G\d!tuir|IK*t.b6E#QV*d26Z %pG=<dRAR$it)0$Fs-S%9O/%j;U5'V5;," -Y-G!#'څV՜BՌKIƽ+Q9 **Kѝt 8DU s0BT8((RQ"KEbke1ݒǺQQ oJCJ%@2[I,\5'Eos0`8lqAXE94,LwϜ!41K^Xp*Vqк pHչ ʀ!'Dt$yMg@HDsM,nA)@0'1)\F˜SR)W(fI V5F]%rnJju  Օ$nƍQW\ҘۉZ*.]]%*YR]Q"KZzE:~ ?w9Nowb=ci G3+7R\ETJT0i%҅NR sS˞p[31pHip4y*7Zi`(>pdCd3uaodMᓚ/> Kn8,ji=Ħ$8ut60 5pA)I+-Bc&\m ojV a"\Hl \ 9i}V /- +8*?)&hvZrӁc`P:۠SlUtN{H@,Z9q+-|Ta#b:h08SXP,^JK(z"n)q$" -9{t2ݎSiu' ߋe'@]\&sH ~ ?wo)`SgޕFr#ٿh,03hic}큁i,ZKb9+SҘJl Zf\ں7nIޓvx6FU#)eJT0H`\VnXIU4(7T\+IgcVsgF*bSa"L!$ .FlM,gMvJۋm [P{@gqsh(уLm,ˮs̙q'l} %M0G8b`DiS^O^J:8;9V#x HAFN4H/^r뼓6HbyDbo9T6oNkDe7P12J!TL651K *[w 籜xU7}xMCFfJ2/B,نr:h4 m1iD9B\ RhH: #|aC !6rOt"'Q \# MxJĔ4P*lXg8ʧ7I Dtrtqe]X,8kY[ \** u^kRp %w؛{$Zv3\n ;B q #EuJc0&Ep (`RSc|J*;s/0"#F,'TȁwVyDޠ"(RVEʦH&{W7 S;v`)\@/gjf\ZXP"J(дV 1u~e/&'&#Q\&tPf)>΃Z`myu;KiPolk2S[&\ /bb$p@\>2o$q0DCRPdLA11M`ŗC5<;0Vj4S hOhJS;oV:+"HGP)RqpDa0b ]!V04>:̕wLrc .h QQʹC,C #tYg0u#@q& 4X#2 PAy#rYR)vL'.OǾ^o)‹'ԊGᤱ?,Vʈ%5~3,B35âXtlpe=7k6ۅ.AH[,ϫLG8KF[' E]=Yb'y0#HlU^#,4..M'[Yë((=yBI|ƅIn]L'!D@_uILY[(b53Yu2Q9|@KG}:r!E%7Qz440UaxU? 0 \ N>T<=5UMK%{RJLnRG f?Z F _ ?+Ԕ.ťÿ¯Ս»e3r2g7y=T/vx d49wf0~eUj֖ooffd"`X`Duą6{{-ǖvU:{MnzV\:N X|7T5bv_bXZ^TZ=7NJZMo\'(sf\ݻ 2KT_W s}Jo\6\^ F Bw[7U}6JeEk ZЅcF]LoSıaC J fvoM~3g"!'m hmܛy:cCWse{fEFpNT1(:fqDX5:zʈUQxNgn-&mGڇtYĢ4i.\09%*ͨFR)$;đġem]ځD||!C6O|[0됆. 
/`n^O_xOݮ=Xҕҕ1IuI:1O1V$}KҮzzHRݦt}ѹĜ>kYzJX=mJd{QPF0#)}1ݯiFP98od96l'R;(I+m aҪH~i0&n}({0=0t1ˆ;A PSFq(knu wT"y([9UA[OU`-2i@* Fj$4#?kk9%ޤ51Km{8v'H0`xO6u^zՈg 1SH,2ˉ^\.jn]F.)5 qnhx@O9jƑ랟sӈGwcS2pQg1`pO>Sd,zMZ)a@z |N1"s 9ʋ{7X 0CГ@e%qcK<ϩ ;T[R,T@dsҐKD9y.ޛ_It:NZy>X" v(iB)aMKo!gDX719bhz1wCIS#$b ;tԉRƈ38[.@~Ck5CvJ56t{PbPM!=b!X&r! ӗfo^aag!Ci:ȧ/4G:=G%+=Y' g"$'!p>VGt#h#?ӑ?sE\Zr]קh|ϞV VԵVo|6VoQ3IٌvyHEdLri_ Yh"R0Kk\HGU11Jb<$x3R 3VX-V,_PfDQ$#? N aUЁeRG"p7 E+ "Gт,7L1uBj!Ñ[,Õ"eD?L(eLii3ӹTa􅜃G\1;rĠ&ʻIŠZs G 05(\1k9"~,?\$ã jm.0׆"cqT(xKi}  )mS3Op_V?LM^Aab3V, Y!)Y;0z qЙu3շQ[d16mBAip(d8*Ÿ9=p%JN!WR)GYݺ~`sflXk@֘5ϱ%,.Q ! ~GXH/mÿ,FJ c]n=479c ,,osd}x?aV4_>6yՌƦۜYFDS`K4/ZɦpUO̝2[C/ 'fNAW]{K2u &Y?/T֏_u=v G8>uk~mzp0:zVQ%{w`srE1\ӷ[hN|T\ݭʓڼעM m~LDLd]tIUszzPaaN;?[M8Jy,8n7޷NWqtCv= yFapy}R33Jp}Cg. %)N&F{A1jۖR`?BT6iV!\7|sI_ [qޫeOM\ȇNV%&D:/SY@.'-Wӓ~ClWO8rp2q%;ncvz!xthłI=jE^iկY/ͳD .bn3g淧3ö蓺41=[{_4RLJgS`ܯj p\@OdճaSy=e)&AxCA=ÓW篝/姧 0C9,~BumBafS[͟7$*``KIjosק 3pz4?Uw{4ܺ?7e g/})C,zQ^1NXn ~GAd𻃨1ߥ~[f@ 4 Li,أ\'_'|1-홓\8`E(YvG4EWowl6uG!dտCji@}Kz16U#%]RV)e9M\6m&p{2"a Q4oәqN8S!z4pHKm' R)F&xn(dG-|xtxb*<4CPEWS-GMF& j%1DMt 6t>8y_~f~׭0\ Ƴs4փyy](ũ-gXN?ثڅuʉ"=oI)VmkMJ([4;MN8NK oYW幕l X+#,HuJ$2vk?)Эt$U֩BOp\. xL(4ZU ڏyh'bt0-^MȪyZmvBQY#QGgRZ#TqWH,Yv"|td%+&K_E٧n2N5Нma;6;R0RF){+97~H 00vܰhUٸ f c4|ܭs $ _ ~r߇!g}3|?e^1"p!9 LPl! *<*w+n1zkק_V+`/W`ֶOQ_3 Л|̓۔}6>|0;o< U>J7KgfQ2RfzǮ4*5[_̾ ucEޕj*$/ K TEN(~A$Nù%NÖn^ mwy-Ĺx6#ucK$"4ޑ@ťȆl86xM " rѐ+tʛ~L2"AjV yO]G҇$+WKvl5(Kh*Kp|\-*iP)u*t0?¬_eU[bw$Nݦr*i6uBO-!Ȁ;K4 ٻ6,W>M0iuʀl'3 d_66geQKRN=ERHMV7[ 6`ΥέqPD!R!>em3wz$E4;ysIT?`@g8} McOBa{kAr@Fj4(,=vGCBIXsh-kAvmy@Tz(Z}{>Yl/18xhmB點Y588'[ǰ:}h cz<}pj!Ӿ&`+m32(\PΞ*dߟjXɘ'O }[nO :$cs>'jHkq'PBZ%)ەuqw"'WT"ZxW\>|Iv!d=z T Ѝz5DEhϪd=]].\Wl'b031-'<*]5yz0tDa 4&],l06\O'LҷZ׋' t36\׈BK0j#+3p'0\_bXvuJFGD6[ϵ{~K9zq^}^ B%൹'H`t1Zx.(HTz}-؊ ѷMB7Д9 x;я8fe}i{?$0p|hNO> ^<]#On|u<טҝI14[kXc X(}B^eZ&b7u'/?}7zk*ֺ]jM";+@%|Hdh N69%yHpgY̘$>[B"m!on$:}¿/o@ش{\xM6Y{ӗݐvݯ zh;}mW6?? 
nUK4^7oF7w5g@e"|W8zui'_53/j.]LoOT)z^[6_.2Y@Sǭ @ƚdSXkZ:P ?y[ڋA0&N )I $b.uNh L)q!`^ Э=C:\$`{U4WD h=CM E# aZے,xbe5:$(8T{ƃkE)aI ֡\3b2Z`60<" A`BGc&EN9Sݨ2&:C1bK* 1ފ LIE47z%w+<9IV8@Tآ}̯Z_ٷ{ʾ-^](Z_\_0+^S^D&X6|Cdm,еPiSCǯ.*OFU~vD4nD)|rfU>DTT{|q ZKw A/fgؚ*{3\uR"J $,]Vps7 ]#A* TzA4Á1Um=j$81pSnm|L1נ}F1-~MZ}Tk]a76n}acڗ1}TPS6.fR6DrḼØZېSz!g7xhVYl H=VD1U,?t薃`p- W8kdPhi"t_tfhz-њZ5qz hzh[;68?Z>R^(\aȒQ$eјJ'ôW2Ѹx2aPtDH^E.'D I8 wjcA2KVtKTYTyASz$zӒ:?pc_#c^j.Elvf%KO O ?ԉKEHgh'w6B(i<42J,U4OH6gRȨg9zbT\vޢ9RYR\HgY2)F4%Eo"p760ڀywn)IgpIB6y4#) (r"Ʈ Aۨ~oj#zA5*kςR:psTq`Ԩ=%taE P>87<^_ 㧤uV^F}X+{ըZ.v<8t5.A_tyYqm7pJ4V.t>jMԮ.Q>jcU\ O5Jhi8!rjO3'imUXN>0_s?ru~~u&UIVb ~C޿V&{ ӵJ߱y0lz6D^o} aCT}l'8{E#g"jn\=ʹJ1:Ffۛv+#gB<{vLJ1:F47V^Q!G΢EեVJ$Ȯ1@hVjUC3j-jf{C;0M͖liC;G jffz\.8cf֎!%#.K+AZ8ʁY$:ZR~[> f~uH ߴ(Ζr!d,$Rj 4B ڄ H\}%Kiejy,:DJ(sQ+I3/H7_BDȧ  >_ SX䩯:E r0KfAi[ k.p}Pr oD!N_ەHQ4߻,w d:IB)0&g521n%JIFAzBR!̲7"6F{UMxp*@qpp7(BK O]Pq(7E7\8q\yE,!#1Eyr[{L$&OPџVL9b6$G]kLt8<\u //$0e^C\'#BiWN'/  8טҝITM^(nY#P Ul$56_;!/Ӓy ڣU{N_1CvaΤZ.{GR [*bj愎BVLF-+`{=|m\]UV ' SJ 3IZl~ѓC0{×'\?"?y1x?^= 5+?_~~5E_,Wte#Bhι]P̯|hlH')"ͷ'OL* 7~/(=Likq49˼`_٧RaM*9Σn |qasf'aX񗍉s5" `81(^siꪙ;~Fguy &D 5>-EjZm 4ZZD|'O^\==5рxbAb $B"(e5_aH샅Sƴ5N|G`r)VJwЖEE3B^D>- CtčٛRhɣCRHVp ]z+) D!$DŽ^0P@#pNz+hnL8C a _Tȅ; thRD> y0K4]E7j6EO\ yf{}/*!VPV +9(ƼKശ⡘2nƃ#El2[gxF 9]8#`'ȮNor(wM.,`8}N=0;Cr9Y:.W8_.Vjju2_ў>0x_Nޱ_D"e f7OK@7RFK\Oon>,s:}a#ͭ#DXZ6.:ïV<6;Z>z?jPALAi$O`rT BxCkԧdpApL✣eImRMPRqə[5NцنysE-gPѵEx^ p7o5%w v//6Afx4DY98)y V{򮫽ط%-N 5Hf{!PvX\]P}5?{q J/9Ie_T\8Qűpb͕BD4JN鿟H.n,fwxLbTtP߇i\8y YمP!f NXpDN~Kڿ{pGԲbyarq1UMpᾗ`&ۊrj.zGFɶc+X=(kڿ퍅WQ`5!ok} չ{On<`҇h$&wJjdvJ*Zb[h- t'v{)JjdG9{ %1,)\Dc&<ĭKuDL4H@-䉫;䙫czۿ91Պр1P IkᴕF1D{A2qIuZ! 
"F /;8Ff^{#iL8p0 _t<ŽF7 QNv_q!LLX9w8XJcXZa vpkw @B$ǖ <%tgԹW]?7L 㼕ʦ|%v@L!83|ZY2*9!J+ٿ_-Ecjk/cX0m'.BſMG7;c>UIoaQ; ^UPL* `QaDpVC^ߦ Xy=/\Uoxg%Mʙ<+Zh4+\At B1ظ'ӫWMt .qE'y"Za\KzR@+ljy/7Fې"x2}%\x(psZWX1SJj;Xkge_e.}*ݬa[a Q1Л{ ,-辁 /ף[` ľ'l#Y BrŽڂ%J4o27'[-{IQ 4.lNdo.`;f"7I?Yϳ$gI>Ϛ7L+̜ḩ "JDߖjQ :TDOl8uB+s Uv{YW3/z1ZdG $X߆|M>p@1R]!SW 96+g+نG:FqM63kJS6-Ȍ(jap)* 2p )(ڑl)J bvd"3,w$S]Ct'3lu [_.9jrIc"W*.W8ud]"#pDʒdl7πjxː2) >omfRx&sc1vH-l;a5y|Џ3*n5O5D&qz{jJ. S-ńZ.$䉋2%5]_퐗u9߰vD$^ֺjo<@NF #(Gis1Wk"Rf#І$9ߊ!|gHdA+sP^Vv.2^3Xss39h|(|LcGEq|$5e+ւ+#*AV-UOJk=ДnlXt{'>"c 3J2a'4JJ.IݟRMiZOYalLԱĹfJ> |P)Fc鋐_c0UkW1A1tMeD6x ή0P(59:M(dTԽch_yvX7BJ\y_̘w_^sFh CL .<0 5EFr'DO;Y!khǓ-8s"FcPR*28sdfA3^1wp,)YR24LOQF' bDHboǞ6F#Er4{d鋊L|M:2Lb<\T&cʀ!t2"W*Hq.7-A 0.z`lw\;i*U,?AU%f[݅zll28OZtW*i*TIB08ֺˤASh 3Gb:0Ç2ń6X@8}2fsa=5ʒ6NJ)sd0V'E,ҡDt"\/[+%|]2G7b>(x>D 0^ARv& s% Ս辧 St,'XX>Hz4j/'fmˉYY|oGZUGo>G-ht] D%e_=۬vcڑ/=z6.sֲmDu~XSQH'S {r[Jf8>,$ݹC=OiV[Ja*{%E0Q2'c<1&4X]N{ BS\o!I`kj[\^\,:̺YD88x́Ų*FKqjqށZROJ)8{Zp]@޽ˏ.UT1IE6D)Kv.][(2Lnj<cBz<=͂v\1|޽u>$͛u2%(dw$=yIDPeϽ{'c|3b9Q>T/̟KTiޘPp޽;e8Kp,)YS{k)%JA"OtHRe( L#G(RILJR"g{J_TԻW5P2ƄD1sׯh!@cGDY1,*R,>p!b rD jA@K8-9NVΜ0 LAnc"m@1 qHO)${%A[Xt4kF2RC Dy"k(uV)e1똊R|pkLAqx`c|ٓb [f 8cҢ7G-;O{l6z @cnKTWNz46TqRHޔ+ȖR tD"*l+C0c!h}X'ُ?>j|G9n;{~ɧYw*%I!d PT>="$5aUϜqƢqANO=OOvAd_/} ):SX2P=*`0!,f^EhGpRŽZ 8Tf;[N1zxRm@OH*gZ<=8ۗP6R]rpP xRoDUaxR+g8k6nYcD;6Ւ9s κ)?ʏVA""Q#H=#%1Dv&灀y_bmǂYfӳ ZQ+?*Nï7*Vdm [:δP FqX!Q;%epZ2](]Œ-b6pv]/ܒӍ 1} +8 & 3H4!DrLg7ϧvZ3poNqln~ns9oW/͔jI…Ct -3ڑD?nj[f%Iu.GJu3f簰?=u;'=Z!vo>qy8YIW" 89Ւ߄YJ0TΌl^Hb`B+KF*Onݎ|0K= -))p0w՝27s殊V +kXhI[m|c=jX`O,{{G@Ю~ ׄsZʤʨO;]YLELIojm6YAƂ ]-ZkNk\!{=NaCh e+Z٧(e_'eFDj?yCz3x "G's?()Z9&sa"E~80/<,bGo_V[6A"a3:laV7%#!qc,2xsP= odJ{J=j*RB&nn?{Wȍ/` 0a;$$/{jkcˎ%g&9d-YFb&bSdu:vNbj6u-60GhG|e30n6jay%nu^R19|_$)!{U(TL2RMhcńJ|rJ zt!ǢO,Fd4Һ NӈRSM<[Lڕ22)ŎV #?͈_6Qcu$NR$NE[4PZ!!B`$y­ 2o,£ 0|+Bpʐ}f@Eh~uhu$Ir&uPPDXʃL9-.W0>I4khnQ:Pue^5 N^jEhszk;gV3i9m6TR#ih)] y"-9Go! 
l!?_L b |&c3ygup2jBp2הvu8[n3y tfQ~T rm}DZ;65Ss!kD}8c2@9E!Щ25:\'^.xSd WLmS9U9J6D ">RG=aV JJ }c]$T4!{BY)ºO%oT*y9G.E*enTmj4gc6p@ )0x`mYGV:mC1Ǫvw.` Xg9p9Z[$Y$Y$YnS-&1R1,pTrV u12a0Q{Wi1IV@EZoY4i5y(H 1@UЖXlCJlJ|"6倏G|b _P}1BXt 9 ^-ХbXKYT ˻xpk择˄eNUC$Mj2aUp,Ac&j"F$UÝMscPnDB)u"Y@ ?xxktبƤbJJ(+o788 \3:m c wHB5% Z7NΆ8,>X(-uϬR)q- sq!M1Gi,N)x;3+}T{X{gC1\ ҙ2;p< y :(Ԃ' -t3V*`6t8)o?v6jiW guy2hއ>m2xtAccn 6^vwwOSh]DgmpZÂe E`͝MY:,fIb&,^VE(#K XZ/FJ>LHZm" 첪ҁʪJ?z3Ձ@J^^ l4i F6|? |W ?iЩr4/VN쾡8St’,h1 Q&3c$A%r)DpqD0c(v0c„Q(]\Fך5&*JhjsHh0̋nw=s03A R()+mP z I ddžƈ+`\ yن jZo.K9MXEzKHo10׺LE8 S&Ꞻ2 PvwYV;CM huQ㰞w{kAw"ɂ8Ҏw*]8!;%G M% i#C1V=;0e׍w*^t,@㭐bMi=)<$G%0k jm9ͼ1@ I 3+0 b%vS(BML寭oTHs0ȓxLl1ڜ,}(cS BX9r,$b M(c4,gqPa(dL[f<6H#X7<*+y]Th;4L'%(jYۚRz*tln'F( EY߮%+FO(MzhH CK2wӇEiiN{4% #>P("xOLޕ!@R}KlpL,ѱyXX,+1^r<Ӯy8i)T%9җ4H0!$£)+Iy+m"}txԜ.R3p }Iu:>e& Qj{gnPF `7?[C_u*󴸹.CC_$μZeYE:{uv*y6Nga/RB !ܠ%y7eqh>p{=f)n p0Q}2YNaZ p01O_> *3G×a.2.l3`{@R2[ښv n vq#|yX:ty/2 lHLha' Ԣ '3`ak +E x,z}`k _ѱ  yZLѾvH qJ67贂F4(8({˭Nz 6y(rj"D\di? fZN^Bvuް^#stϑRq &@H3lg3T\d!m7G]6lv !բ77@QNbXNo8? 
]C3l ÝSȩ&;ߥJɊb#fz(ɜ,e!4AE}h]7[ ZZ-ֶҿYvS2_.=|bhYGs_6{cmUΌmFpf;xmy}#7LcyS 8!y@ Q~ ÛToqIbXE h@`TygK=ccJka9g=7}t`9Z丫a~%$tJY5cwQ8R?!I L1#_ǒH\ui:%0q۽C`)qT#(RVl )JtPiuŸ5uFxiLc'zS Q"JmYe #܅8D"ȥ&m|Ѫ A"yz_;soGz+4 D %AbF\.Hs1@}o^#jYOy率"5j}&bDuQT}~RH/asSq0;̋9c9gI5iƫZU( p.}f]x;4ZzӠI?+S^b$yIx~{WLRsI)c9#\;)xÐV %#)_:}br(5g1Mw؋4FA֚f$gHӶ搕D0T5'鳹{]rK`l7"k.do.z[Au,zdV\+1m>EAm Y 2[-&Rk z)lʧ,{Jx:`u`0jHtr3:?-#4Jc_u~M;A+7SS$hF P2dZ9R{E9XHe\{~a32T_ 5rM(M!+]VD_]_'LwPeeOac 9[-\i&@Yۚ:u~)g4O갆Zz$GLE-0k#ӇvGPךunA.{uA2RU`\!-) V;f;?p|0.gncM/C}!u.xm-BS:0 usXw(.iC3 .SCQA9'bF n /1^#hDrT Y01nhZ'hd`cLmXr1%G,#u,.zs Y%F (l.wq8``"N$˃g39A-wT'Z(\*!"l(Ց?>cn[C Ӷ@-7^-?Osw' 2C)FA (' UpVs md &>34Rro0}A{>fe:#]+dS @%oգ"E}QiM+Scf.㒑Ld*ѼF˥v*k+kޘc^N=$3JcgJל?v_7CT*@ta8 V:W*I;m9im%?Kw>iPF[D0C,oS>Xm75IM\1H8Kq*BZ_Y-X%[2;;1@MZZNP@}tSyK4DC*hc||EYeeFL()_=ζD'>A}&46#MfwKf2"vv1Ĉ~XE: ث{-h>g˱:}8/a^[P;P(K8!f6֐@s)8[BP)4H[ik`(VLZCu[!=3^`u34ΐ/C855)Hj8*$!͐2f"\oD8萣qA ^E 65SX$v 6Hl1|%Q|{sblK&Ug4™T&dwX_-  XJDd])S)F :s;ȪlQT8HWD\juC_6} K "<+cJRdLPU82Xd޼ЂjGЧn/=Rȏh++zYC9Cłu,,")Njt +:r:$Rm%K0IUf Z 3Fht kf{9s9d AQCJ5+ Ά+Je+.b﮹Tg3N;U2Cq`͇.Na`3NL." O` c)xx=qb.>y6ǩ4ДQh1o0$Ơ䎪6J 驎 :U:'4( 0(ɩpPL0 hlNQd<51DIIPPe\ṉ̀&S*+ [njxv=G".J,o+0nF{f.Y?~Nl_a;K͝ǖgcE"tvEɮA{J;CT| !fgm_-s=qF]zKh](MˎOv !^unǥSɈ֒oP2yBt5Q_(S [n_A۴6\@~b3{zb/U`z݂"l>?=?g&s\뻻]ofWwP|~O{3'p;$r s۟ۇGQiRP됅 N8N =7\eZ9^4Q޸5HՐ2!y;-ˮJL2O'[눘y}%\KZm/^'>n4ޗ{0Upha,18MrB'x5[͟1ʵ 0]zcF1$lh5ׇ44񔁒60m ~]`5`\؀&AmsA5A$ eAB2uA ZB: g=R? 
LoHH:Fș;=B@ѫzj׋N_vYR< >c Ց?; |jCQG֨ROZ0dpBw?MMk*pڇ d#~]sSnqohUbոpynE"3Gd}Oj2]ݖf&Ɛ1kDRŐȍDZg^VIaoD_r DT"ŤeRh#AY rS jtejdzAɴlJ S˓kcLuY`RxUWЬ'MvZ Ժ#ޜ>CKAHH2'/aZI%<︤fQ"lA1`԰ S)^*#5JjޡCTj%fF*DW3SDJ%(!<p D2()Cs6Ͻ3)%:xQ)}Y[)NiR ( )QVy_`#*XlS vS2cQZmֲV/*j_- MQ՚s;r@X tDNQJz eŭ/,=O2Ȧ ѣD:8uo:r'8́tJD`O)1*AԺŎGJc9n.=__6$k ֽ7OOjm@XVRTJo7l/!h66xy^|xSvӟ 3ǯߡp],7n53pjhnzY-?_iyW)uJ߃6էhgKJO¥Mf$4У5?Bo^ ov^ ̯?Q`=R .ޯ4QS&Y6_o(*Q#&q|~bt=ݨІ3'F!Qn!H9eS™&W]:wF^n d1F7XCHt=Ps}'ҚiHmq]kve}CKgO0 }γK!^Nˑ(N^9xD;YӮkv2wB-Iʛy VsjC(j$Qy(J/Gi&CUC=!=Eh5yUIGX1:82oB֡耚Zb7!3g^[#5*BLx0s5GS|ԣ1+M6ۗDM IH0Պ OT(Lni̫VC$0J!hjc9c)ôSf4V :Қ oCxoWCG(ȫ^q8T8M`$fOM'Ͳv<X_ozfb MveJ;7aG:L`D6+C̤,xzPy JԊNxqMrqֆdF;n'\,VM {ZǼ/û_[r,~=T_ͭD+I,4Р$zڳ\FU?^ٓ U +m/vM4Ԥ9F qUIm%2C+5:5sGrVU|]T s :!9 +*F uH"9 TR̴~T݌;B?jlc^ۛyT"_wcfccy n3 xYVKA7$p.i 0Z\+ *<X3^mYykӕuR|zAcKvr#6n5X/?~Z~h R'pmՆ>*)q~|@&?lj<zB.s:uT %ƬEbv]7mt珫[(I?\x Bd(i~E)Pr&@͎́Z+׽7Us"|}rL4f.‡X/W*#PCiUN9j[nI[ %9~vx 2x u~CyǙȮzcK r݁nάJM|;okṟ̏hOD25 =/o_hΎ/gĹE{8B[88"TӏMgKq=UR֙,Laޭ f7h͝j!%`5OJDCQ"RʼӞFd&uA&_B\Jǹ5Y})[`YFtƱ֧Nqt*-5~yL *HTbnJKɗh fVX}=564,5o?^tJ͠K#Jx%+@֓P\1kBNm"5c7-qj:N.EmKL{-Edž&lP_RWg|L:7uŅ$g[@E=FL ʎz>K Z b흣ї O? "ƈj%}_>B3o٩:@ePkaj4E y14xHk57;5K}Sk >)]0aKi58%C^d˚"wx-HgSc蘗*07'i dwCa oAg&(:65gCI @%R%GjľfQ⛨0hSx5 UVQ!g\%\BdXF>)ȐvTBHVq˰\ԭ;[5qשּAe'':M\Q3NR BkY+s0-hyTX%S^,3||ۚZ\7Z ߲~,z]0 % ,Qg2vR " 2R=1) ZmF˜="&i%A#Ehp@kZ[6r(54L4aN(|Z)-!PSZp`2&V <5/ð{Bb B !tSD*lGȹ=EF^Ĩ{g\o> SYjysS}yu"qP NHI@%Qjz $~ G'm46EyUt 9t1r%/􁭻} rw2x *KAITJI.$P$6z48v׼ @/&Xs/㾪;NXH kB<^c yҟgu}_ ]p-Fx2{$ =۝(Ǿ RjmK)@:#D4p ό FؗsTmپP)E4fFʇ":Be^cd4'> uۙ)ǎX}H#F:`-3|7G v ӣp6SM]x5 ށݜ@fXr&3߯زr$MbK$~Ū"YbOo 705~nj4vhA01W(s?F$x+ (ir%[Tܽ !9xOw(M yka4tJ([Rõc}R&Qwen~$_b_/w7cU3D1ޱwn>vAeIzwEW.#Leeoӏo/&a~\=@b~;p'WofJ 8_LvCFJV@Bp THTyá"YuBhwCS)| x<憷ucJpQd(tQla@A,ihDj"-Zlmr3^K./ۺoJb[ZZdw714sbvlG'OZ0ۖÏL=~%*呩x|N¡=L#Sfn.PnG&}F`w&Ex f*. 
"#*6LjWYY7W)5IXdzw08ةޙb:=lܺ/Ux ,|Aa0b4CG׹W$g'eObfޕ \6ؙ_&XBFd!#L_v!hJ`C:Rz9:- OGe<xl<~;6pf:/Y0jCOeGSN`k@HA x @ `ǃu7IHϩrぺ%a0'ʛN$S c)5˦@)TH,J,J,J*՚J(lac(GCtӅxR7RPp\KvV5N%dB9 ys`N テow+r#LUodg  k=.D:o$0jq 5:fpWA kԻ]*-A* ۛE?Mթo6_{!br}Z%9ݗǽ#uL"o2k eL?Oo~Tr_kթ(gIe1_.&̼NqE)J&Rh-5lGP)%&? dG}g" Ztd7+nxxq[Rۯ ̵$IƁU 7vPwMQ sCcth 9%E:$CkL>$y~ \4Q{!q6q~}С\w^,j|~fPZ񆤖ߒTr>눇ϓo&_>Sbm8ۻOX&Z7_Whb56|{:!L)aKhg^ŋPUGY.1wꗷes.;0AFs-b +B.A(<5Ahd[eElk&iǪh|l9xC 7:lHd,lh!I:/<5DAS(O<~a@2gQQc$J^|;{5h#Q]Jb'\QsEs6)z#<c(ךPeD;-77777ߓ rPX%%"Ͻ"܃#oe'"*IBaE^D*`!f8b쭷Oj&Y-y]$IWΚ{ʺ IXYr^O X8/ذNJIn ~ƺWon~uv'VM-PX+z\wr!EK⥅Ak8^AO*qp=ɴS$˂8E`ld,G+n^ʜIlzBYZ{xiݞokGZ\YyŤ>Sx|ʅ@)&jEM!b;`N<ӚJc@v2r k+rhԬ6MSTwٽP^ۯسךH׮/g \SI 8$=SͿ )pJLʳ"oMpx'D/߈JivPnm;Zcw)Q=dJzJ Y8(|&, fJI=VV'n|z#Pufx֩N33,˞K]lY؉7,H%5! XStFϹALd"2bU.iY ddg hm !/{{8k$eK]Yb̙9D˧8?(:KZbފAjo#*}E2rv^ xKR@QMzMS4 1(2d0B(Ln12F')xB~Xw,dA ks:e('T4=EދS*ވ"374+Q5!B&Eo^mSߵ K46BxQH&JR]+b PQaTb-η(\'X畟,eSp4ϰ]ʎ]5]p @@}tZ?ZYEǷÐвsj\f!JJ0 T@._R#@@ Պ@(66DSo|iHԧ0堹y<ЗnK([Kca0/?dMZWZ竷(oHMΩDз^NIɸQ1$گ.ai[FHv)9#"lXՇ[c6ZIԧFըOOed^FZä&N= I*`?4%j\^/?*_;{iQ_d~yw7p - w~iFX́5v]0T7nLD0"PޖWU7(knwC[Ml}YYYYU>4X1Zr(52;)5ޤ2-צ|䊼̈́&1i( ĘP\7okDZ*D 8:@%F}ϫdE[C/$f67NuՒC ,5fV+sp7c$9he(FO|Gͳ~}~e5KzWxNN.ۢ1~6[ǦnLFEYFo|rQSg#M>po7kX0)_&پm6o/prDoɇug<3¥l( =%Vr!%GKQ FCvē<0 J Ȝ Ϫkҝ,Igmp5y:_\R(V0KGř=#כ8:ojV'Stk_z7|5lyL яc9PYWƺ&] Ps|i o?%iT@,,]6VT~fKQ)3Z(OI G[5IwIp݌634h (N78;igkҡdSy7NƕrSoQ203P8¾ #;1z^7Wng&\$T_yHFEUohڇK RA5Fٓd-$c7~5: J#R*+BHt,V}pՉNg/Enhü_mnqW=ʀ-<ZpkpQsujB)GFiC{z{dGOɇLO15嚇7l'ƻOIc:Q(1'儓 诘|-wMKqIzxW~C5A10CM!zF7XL> V'ޢ0ֿͧ=6M]\]?g-{؂'J2g<#(W"EdFE$Ow7٫7g͢7y?M, PMr~*(~/YRBuQÍ%YXt=r\5'ө"T=6fvaӭ^\+w8o_?V rI|)ԄR^^#C8f^RD/0}.RSDDkBÎīeLZ Ӗ3|e0fHIγ;87Ns Jlo?b|Q"62bɝ1!8ȋPkL4vlK+o 9E> 42p aJiMOm&\SD?S bǸZ.ſЋ&+6n1&nhNМ9Cs͊|yFLz!YK4+oG(w^3bB ]TߵGs\a׵[HYϔXYS.*L(5 RIĵd":PePH4)0 Fce, ZRp_ZjlwLeS E{r6E<';QQN`t|؋u> HL;"!AJ?~$QA:e9$qISƧKIfb}a\PhO?/E I$:D% s:V8g f14B͒^ 5 MSmA1+;wCEcEͽv%;r2.x5*q<Yk\Vr))q֢jpAjlF~X/e! 
pෟn6T 2&D+AJ](0 AS(0=k PED9w444iJ=-x$ M&Z"HI e:PHO"R RkC*6HkN{,3Ef ]ջRFh0_QMzaEʈAR&!ye4mb>q#'f+4sRi4q)r:p_#+YT fufa Κio4BeG b4TKc#Da."ӹv3.Q_%mtTҹD Aŭ5#VT[:i'3N. 6oD &BF ,<δ.\@4r l {Pp1!#т(X@ZhdSՉsN$5!(TT0,ìj*QϬ| ( bL ȵgݯSRf ICOڢuzs9͕e3O, ]0/-Utp~pw|8 v ;yTQ;g?}ƴ~İ++ZÌ7UPti `<:^D.^W+/ÀzAƣUY̞,O mrw]N~Vhɗ0.qLcd /=uI8$4k B9.eLg'h-erpyO.zR-ݼ}K@ eKQ_jxpT [a@3~$p RbNqM$h'|1ϗg~?sԫU Y`&)GBgYTP +4Pv&Ʒث.E+Ʒ--@i8-` Sc&E.uc>~2C-5K`E1㴯qM18(JddOJՑT=qUC85=kPJ :gbF ̮ɇhl#^dFSE"D*z+1J 00 ހi!6A,6ʥ85y1gP9#R%IT2dLS`x z>^3~܋E9Sw܊0-ɛ^ LXAvgl&G?`{ )~j!z!^T0!AU.S&qݠw<ܶ,['Ӭm!r sBK JR"jaeITvyowyokaQjM2 V>dRfKRҨ8rŹց |X]ćյG:()_ύdzWlTYH}6hZ|zݔ?S4OZۋnbNkZ} Q#%Ws`ʨ^Egs3ލȌXhAu=[7[Nf5yp bʹ|帾)gN : -$-'l%ca=]NߟV…VVR.l)ʽ1&x2ܠm&?<ܸ,[d\7n6=}6soze5U|OMo[m̱/=$x18{VGR};Ϭ NjVI7=;3͎fo;>N\>w@o'cۙ(Qz')C] h{ Kv Ba77ڷ%}eovrP82Lk5wU_I&"Y{ߢk͎OYa)7$֊1M6gj%]$̛x+yQ^ə(Tp}PF!xb`󛏟cDMt.;Id͝d5u:ZRx7y3Tw]LA uq [d(jG7 18E$QD}wazq.9棫CTq j^cm5$?48DMr )IE^{̓jy>xrUϊmݖsrr֔4="I;q4Z}==Ԥ- =Kk$mi'''JԞ4!$]VϿ5$yhT7M=Ƀ>C8:>9)h+rԽr ׬>#ca$?7I=5,BQO֒y݉9v%BgΧ] R`vg#*X"g1yGst}0a `ٴ%@mx{P᭜:y۱c)֚=R{f"FIL4byb=6AUJ٩w 3ZJA*KXɑ K *A( ,hA)m Y cMܡXaW(o/6u2VC'zJUeD(Iʽ ɺNkZ3%9Z`'l0`\Q,eKxc7AB)NbhC͙JU;#nAf+<10T]Z8K1n]Jo'?jwA.Fˀ;в,Dd`\QL{f>j9n$?] f8މMOtf7Ħ+.@U,`| U\v2TkV4;>aD0)bqR䰏nUf 0EǬ%QH" CpN{R߉b7D"MS=[1#UKk=YSJw` /,0+):HKڅ# i ,ΨwQD"MbEk !VXCwT3Vb};XiX|P&wU; a6Φ.!w&o?a"*[_ӽy^ot߷(#F_cIyeO¼+фQ1AuؽGe</k2e!~CJOQuE]wYB)â&eoLqJiT=m Pb$JG< P^Z gƽsh`/22R  QRYkB;k2_Y޷2GB^ʡo?V$q5_0V'gT ]B!qA:0i/#0ks%j[j c. 
sL%5;ٸhO<` Hd5zSP4%G- `(gh'JC ;TE+ݷvk۾A(bOjzxz1R<*?~X<9$ޞ@j#}/ ӷ?ʌil2߱wo2QM#`,TC$n#Fk7'fv6_m`=OF1-q(9⛕=cPEԋSB)f88Tն""2 Ϩf,͑v>}<kXZfQVD{ p%-+A cZH͈$ԢoGg"@M!jʁ\M ޕsN6c]}=#[՜Z[Jo:ޜ&1t0W?E Ӌ8'8*PDž]oF!Q:&^D(z]O]d5~DY'J1Nb`,$iI)1AeI w%.$*|~dqIu9x*}R@zjHcJ/>g5M٩.)Kd0*8^h[)B+870QZr.:H'i'7{ (][X_ 8 6L(Zi"Ѣ}7b'NF|Mv\vGJi-Ӟ (`[B4%75:b1Ew:̩oPBmSΗb"bTPJF ZԪ4aȀ ~ҪPyz|ᕤ И#H )*gEt,2Vb1{;\R Dc8g[E!OYqTmdʗ"l3 YeL&óGCm&YmФcPPObjj77$ƸMH 536$TэP&TSۂfƘ\vېF1)܄JWdw bl E~wIVMMP`_PqQՐB8XqB,(bmir2hI-cr$^k PJBʄ'Ť@Ɯ(9COjj_~Z@QJ8QQ(>Ƥibc#Z-< (%5@,݆lnSkI Lb|U Lo!JvF=?.^~KG3l+?$TR"vQ*F1l|>g@)-l{zYbd^!`Af2~zpo֜ŧ;).VA)#\ޝ1xEw<[/M>e9ԥ죖'XΏqWJqwQy$lfSnjjGdR]H{G LiVaXL? +1L?n|_SkNr@|0 k]-QslgʪÇa/ia{xJH9+D,[5BalA!.}txdƗ$2RIrR {}؉YAi_ĵw-oZvZ&{[ L mj=<PA}'Oq*I_X:v\"gH5c~pe6de 6k0CD,]Q8p*T|Y ɩsEʡ`eM{W`*41QPƩG7'fSG/$>L0GIH5[\/MQ'Bs CZ6!pT:\ p8#nB^z٤X&3JGGt^Y j >y4೽ iVɇ"pooٱXPEϪ8Utga0_LW t?-ˋ Ԙ?R+/>{k{;X\ R9 ^xEwv:|I\zElz;ËW pҖMHzEknL)YKޡ]ǒKpǽu?wwg0)&s9Xie_%6?˝O 2';_z޳Eb$qeQhKo$8nAH֯p *Z3כrI`Zo#&kYsJ+Dt0U/qXqY˓g69Ս 6u2J$Ww=t"izSkҞ%`'e/9!A"6 4J:Z/8\ʳdh!cͽdFn1Jm`'*2< VXE嬤)<pE8}TyrO@(D*ZZnږmXiHEij2qihZ'dƈ`Fl{2zKC*RfRll4KXjA5 *nS!|de&%DG_.rS6TƲ{2^/ cFP׭ӈߵl.~#,O$w.h8q_.Wb C,NhgO'71n̉Is$Ѹ&vUBXNFAXC7eyq?{ȍ`Ky,N 2e^=Ndّ%bK[dݒvͪX5qZKb"GF0$ؤ'2SݮZZGs^S &7)`^:`r/#YW hg[aTB]V[*bx3zZoU%@ufv8dbdOK16&L?vws\c dNsaLzZq b$pOh0\[(P!;Fos1ǜX(acdy긚%Lj 6uca£Dfa>ogeIS;߯R>-?RdWI{`3'>*Y9 ' r4ROF[ޱ!Yc5oQޢ]P&뇈q6Y崙K;$_$@u??Y8-_p8à ha8C~D:X yGՏeW?E 0_!acr蹖\[%Nz)<6„*Jc&FAkdy>Tst1 oTA!4NsX 77;7cKEhR͝]yĘט K6m5M&|g߻_F|15-sQ/"NX2Fv<%jHM&޳VZ%RDCRe0Oy_TTIgJFb 3̥h\q^%zO)kUUI- اA%1R?L$"<|AB||z{4mK4d+էX%V~?\G?l3 pdۺ1ߞ_L٠ Ͽێo#ShL|yKi"{S/}yo! 
Z)y?{3hcy}pmagg/~sr/Os,t]yS=c(|J6YFݍJ!k.lk"Z)Y`yu[XJ-|s2Cca'_:H~o'7RF+8$_U}yq6"\ί~z`RG, LĔ2# ])Yd&kPN 4d'ubw]YovS‚8P5%fwݭ&wa‚8\܅R%Vlg 8¿d#*6+LX`1,Yoj-7RIJOK[ 4T(h!7(dAq,$4쵳683):m :ZGUͳHV AL; ,Wh"$8V)8e5((s;G|(!-5_V)23"e}Zꯢzedwm[#Yu]/25R $,ݺ^3\iZf4 { Q#(5X {E^g=4&F cJzP6<0(6ipq2^W2o3"~}HV,~B: \} &4_I_ac*.ǰLҞU䨈UM v ;OIo:ځ (UI+Y&OH|uFQ;ˏ8P\ C9dHpH ЧXrw灞@%uݝbr\GȾy6/_xܢ=n!&y ;$d v`K+ F%HrJz|>HIDBsŅK"_͓A?Pt$a;^ki}bZKS(_ZKQ)0/NlhD:mS0bպ*`P˥:JFHM8SInlH1tGvZv ȇI-NO24ږCb bи{bG+#hF҈ԼWCT6̋){\sp*mCf:DKwa[vK_)[iF劘nѲ5ޝeT L^}{([=[/=\|ݷF'}"ʍaCâ D=\{@4?l[D;]Gv#}y ;%u 0@L )V`7F7=ѹ{R.:'ssJw86:>ALi 浽Ȯ?s0; R&(:&tEMU%ik|*Gvո^xֲu^cӢ|ǹO' == *v۫j:`<ԤJSHY O5vKxT^ZKB8ӁuyN7RbFJZ6;( EC:To1vwZ_nH [X&5}ÿZЇUFa6?8\0zp5X;A]A{4C>798s"^%pW/JZ8]{ݼdrQ+ulmG%grFRYP +2藊K\P!!\Ax~vHhxJrd q-, $ /N7Vrt=Er? 2N4υf0Yۛ~AJ+EPt§#1Q "C:)c9U q76yH07EA`[ppP-[ H03PHQԐg.o'PE<}Kߚ|8uX_kvnd:w.{f[,2k›Yb6f)%$Г w\h^ ӐN(\$'pVYP{%QÃYce(i|#ql^@hLIy\~+-x eR':uE}D Qۘ kO߷nHJ~ĠQ[n X LL6Dn"Zo gw defW^c]oAe,p6TG&&f_4Q6Q6Q6Q,-;j|~NE52+IBۂ-Sdy>TӫR;~mhc34ێ9T׫^/_ӫB)iX?UR Wo3\tn ?7dTQ:h9#I\8iZhH+'cGRyyI#%LBP{2MȋY\XAF#`N`husf5Fu%B%VկyD:QUdTߠ:.9{?Pcunf|t߭^z_ԶEiy܀rW˛JV^0}%޼Owwuxv H,94XR #|kM]-fYy)I@(hh͐v#Rk"ğ/h; _6H TyS^p[yQ^/͛}ݜAϛ[3 zHO9{}v %i67 T"LQ=%;? `9,1`3Agw囿l0[ \ͶT.p%Ð_k2)K__-dme6A$s*Ƀ0z$8l?4zgzl(eQ Mufjܬ .RBpt]hPMkSA2SiSWTJgS2˾VmHI Y)= {+LHіwG>G>G>GuoDR^`u%0NoP:$ ݈mz8_!6?qvfa͘|;JRRE$H$5UwuUwWAthH@aKˏSqJחV z0°]bJ OTr!dEեVnkr)x%4k5Hol4HM Jʢd^{nE!zm!PZq A >=i ntC=i OpsX.d syc!ÕY2kUE2_`QqqYkQj+tnY0}eG_&ҡ_>9z28dR>y u۟|2( C9'{Ct&#W1;d$(P/7(BmzTw3~LYm#mtqt'9})"`s,<@KT-*:z@;Q$V j_.,L~0k;&"z[UaՐj&Qs#BH (2NB&.yFk}NL NM( mg+w|WK!p%4S~T#뱍As$=! ku%1r2 gq%i#}#ކlȝ[a\ K//SFBb(9! 
9bHfʝ=+B0bBOzF+-,lA{ɛ*8,ShqCIvQDN -!Q8pDPJIzX֞=8kH}I ?bCaRr8>F+ ḦIE`J ndlUu :a-\ہ iuo[-u6bjTBȗ/&wX7_{QPër>#w>epꨫLq @mctSXT"WBYʙxhX- :ޠGeو6z.&+ǓI5O_;l-VgwݭBjL׸b^?;wSxxӝb%V"jd{up¿d)1YJHz{0(L{^63TgқBi0#%(#)2aDII<˛E]Fpߑ!"Gd {L *r Rs-O@ )1F|-긁_ݲyW,JqP"A{9 ^)e6X8#hH/GQŠy"#C\D)1xƆ^Aa `iu@a/3\^r#[O;r#A (+bf` vn;!*,fς B;rbÑ+oe>$oqHRxS"Q0FuV% ,߄l5`H&[̄@`C|jvҐIG`_t.u2C b|S'`-&AlJC A 7l\ .pCʪL$iPBGq/s$$!KCDD0A! C,' 5Gv{YtĐ;M!s;xF2# %{B/`ǹc@Ġ&@i3MGP wld \G\IWp +bpJ/CH֬#ч\qO@@!^cB:r zZ84^SuBh/qǰLD嗋c)b cr II:IحQu<fy L'/oȌL GZF>cj3̉|.+J;Ƴ\LvVkE 7k 3tA?GD9"`5FQ8x%'PǤg4?%?'ya(iJx6*͍RL7Q?d— `$qk(2LFc4zKqX0U. XQ.IzU4 wW|AUkJ-EG>h%'g[+͞RFuZAJ dnIUp.>KGJNH!"=G.*UzQׅ!Iɪm.]!9m\EO#G* *nPNvAK+#kNra)b=}j!WAxz>fDTXi}tdLTNFmV)$b%C'& y DDh$gQDq%WGkQH+kYigZHϩb}L8F?;:z6C討zƜ" 1'd.7j.knxV] oRwӹ;Zraz_G%>bօ"C{w,[X p+0nc[ZzmvV尨4ET2ϻ8_Zz-@FN/_18xo_ Uo8xy>x4}yf>dYi7ϸ7溢Ο 9w$i;sg]fVI.?Ǹ Vwuſym hk!x,B0Bx6Of8etvCP[fOkvNjkxzVK#J(Q|pDKEku||r_Or&4^cwkuԭ:ʖEƥ:nRF䍜=iڪ M }xHw`HIk0Zt呖6V9Avp\,bul~e+Xsݣջ9Y'Iy6:Х״AB6r$@ ÞuB{iz6,*`96)`Q˫}鵯KfQL8𻿺NM. 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Feb 18 00:09:01 crc kubenswrapper[5121]: body:
Feb 18 00:09:01 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:50.970952542 +0000 UTC m=+14.485410317,LastTimestamp:2026-02-18 00:08:50.970952542 +0000 UTC m=+14.485410317,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:01 crc kubenswrapper[5121]: >
Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.994947 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea8cd216dd3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:50.971061715 +0000 UTC m=+14.485519480,LastTimestamp:2026-02-18 00:08:50.971061715 +0000 UTC m=+14.485519480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.001032 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2c1903 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 18 00:09:02 crc kubenswrapper[5121]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 00:09:02 crc kubenswrapper[5121]:
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,LastTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.006327 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2d3083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,LastTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.016698 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea91a2c1903\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2c1903 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 18 00:09:02 crc kubenswrapper[5121]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 00:09:02 crc kubenswrapper[5121]:
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,LastTimestamp:2026-02-18 00:08:52.270513655 +0000 UTC m=+15.784971390,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.025047 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea91a2d3083\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2d3083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,LastTimestamp:2026-02-18 00:08:52.270592037 +0000 UTC m=+15.785049762,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.032993 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46e9f0d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF
Feb 18 00:09:02 crc kubenswrapper[5121]: body:
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309212883 +0000 UTC m=+20.823670628,LastTimestamp:2026-02-18 00:08:57.309212883 +0000 UTC m=+20.823670628,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.039403 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46eb11e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309286885 +0000 UTC m=+20.823744630,LastTimestamp:2026-02-18 00:08:57.309286885 +0000 UTC m=+20.823744630,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.046280 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46ec7839 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": EOF
Feb 18 00:09:02 crc kubenswrapper[5121]: body:
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309378617 +0000 UTC m=+20.823836392,LastTimestamp:2026-02-18 00:08:57.309378617 +0000 UTC m=+20.823836392,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.053497 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46eea6fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309521661 +0000 UTC m=+20.823979436,LastTimestamp:2026-02-18 00:08:57.309521661 +0000 UTC m=+20.823979436,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.060741 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46f47551 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Feb 18 00:09:02 crc kubenswrapper[5121]: body:
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309902161 +0000 UTC m=+20.824359936,LastTimestamp:2026-02-18 00:08:57.309902161 +0000 UTC m=+20.824359936,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.065618 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46f5e433 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309996083 +0000 UTC m=+20.824453858,LastTimestamp:2026-02-18 00:08:57.309996083 +0000 UTC m=+20.824453858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.070283 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea65937d721\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea65937d721 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,LastTimestamp:2026-02-18 00:08:57.417552202 +0000 UTC m=+20.932009977,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.071504 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66ba4c6a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66ba4c6a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,LastTimestamp:2026-02-18 00:08:57.662373543 +0000 UTC m=+21.176831288,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.076790 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66c5e8e02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66c5e8e02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,LastTimestamp:2026-02-18 00:08:57.678954235 +0000 UTC m=+21.193411980,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.084155 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:02 crc kubenswrapper[5121]: I0218 00:09:02.150038 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.150223 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.679053 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680627 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680780 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680817 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:03 crc kubenswrapper[5121]: E0218 00:09:03.698724 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:04 crc kubenswrapper[5121]: I0218 00:09:04.150459 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.151544 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.562493 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.562918 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564106 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564394 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564644 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.565635 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.566487 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.567157 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.575966 5121 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:05.567078673 +0000 UTC m=+29.081536448,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:06 crc kubenswrapper[5121]: I0218 00:09:06.152321 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:06 crc kubenswrapper[5121]: E0218 00:09:06.700129 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 18 00:09:06 crc kubenswrapper[5121]: E0218 00:09:06.838367 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource 
\"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:07 crc kubenswrapper[5121]: I0218 00:09:07.149641 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:07 crc kubenswrapper[5121]: E0218 00:09:07.323519 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:07 crc kubenswrapper[5121]: E0218 00:09:07.640485 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.151172 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.182147 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.420991 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.422024 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.422964 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.423545 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423988 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.424268 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.430614 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:08.424226081 +0000 UTC m=+31.938683826,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:09 crc kubenswrapper[5121]: E0218 00:09:09.114741 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 18 00:09:09 crc kubenswrapper[5121]: I0218 00:09:09.150421 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.149968 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.699856 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701517 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701623 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701752 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:10 crc kubenswrapper[5121]: E0218 00:09:10.718700 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:11 crc kubenswrapper[5121]: I0218 00:09:11.151977 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:12 crc kubenswrapper[5121]: I0218 00:09:12.151816 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:13 crc kubenswrapper[5121]: I0218 00:09:13.150139 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:13 crc kubenswrapper[5121]: E0218 00:09:13.845257 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:14 crc kubenswrapper[5121]: I0218 00:09:14.150790 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:15 crc kubenswrapper[5121]: I0218 00:09:15.153021 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:16 crc kubenswrapper[5121]: I0218 00:09:16.150914 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.149516 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:17 crc kubenswrapper[5121]: E0218 00:09:17.324776 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.719338 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720746 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720773 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720819 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:17 crc kubenswrapper[5121]: E0218 00:09:17.734431 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:18 crc kubenswrapper[5121]: I0218 00:09:18.148201 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:19 crc kubenswrapper[5121]: I0218 00:09:19.151186 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:20 crc kubenswrapper[5121]: I0218 00:09:20.152064 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:20 crc kubenswrapper[5121]: E0218 00:09:20.852395 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:21 crc kubenswrapper[5121]: I0218 00:09:21.151508 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.148751 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.269990 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271271 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271386 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.272175 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.272711 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.278348 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea65937d721\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea65937d721 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,LastTimestamp:2026-02-18 00:09:22.2747512 +0000 UTC m=+45.789208945,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.523175 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66ba4c6a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66ba4c6a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,LastTimestamp:2026-02-18 00:09:22.517527877 +0000 UTC m=+46.031985612,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.533413 5121 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.18952ea66c5e8e02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66c5e8e02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,LastTimestamp:2026-02-18 00:09:22.527687607 +0000 UTC m=+46.042145352,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.150986 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.469096 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.505131 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.506107 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508847 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" exitCode=255 Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b"} Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508974 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.509397 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512222 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512269 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.512708 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512967 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 
00:09:23.513198 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.519102 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:23.513149033 +0000 UTC m=+47.027606768,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.148966 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.515500 5121 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.734793 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736162 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736189 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:24 crc kubenswrapper[5121]: E0218 00:09:24.746119 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.150676 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.563028 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.563338 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 
00:09:25.565333 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.565388 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.565404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:25 crc kubenswrapper[5121]: E0218 00:09:25.565834 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.566253 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:25 crc kubenswrapper[5121]: E0218 00:09:25.566607 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:25 crc kubenswrapper[5121]: E0218 00:09:25.572329 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:25.566567863 +0000 UTC m=+49.081025598,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:26 crc kubenswrapper[5121]: I0218 00:09:26.151555 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:27 crc kubenswrapper[5121]: I0218 00:09:27.150963 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:27 crc kubenswrapper[5121]: E0218 00:09:27.325803 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:27 crc kubenswrapper[5121]: E0218 00:09:27.860694 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.151494 5121 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.398332 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.420758 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.421063 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422218 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422311 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422338 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.423172 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.424389 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.424807 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.433444 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:28.424743389 +0000 UTC m=+51.939201164,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.918567 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 18 00:09:29 crc kubenswrapper[5121]: I0218 00:09:29.149204 5121 csi_plugin.go:988] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:29 crc kubenswrapper[5121]: E0218 00:09:29.850581 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 18 00:09:30 crc kubenswrapper[5121]: I0218 00:09:30.151459 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.151683 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.747004 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748568 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748586 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748621 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:31 crc 
kubenswrapper[5121]: E0218 00:09:31.762360 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.152008 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.355290 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.355530 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356611 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356751 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:32 crc kubenswrapper[5121]: E0218 00:09:32.357891 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:33 crc kubenswrapper[5121]: I0218 00:09:33.149366 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:34 crc 
kubenswrapper[5121]: I0218 00:09:34.151268 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:34 crc kubenswrapper[5121]: E0218 00:09:34.867569 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:35 crc kubenswrapper[5121]: I0218 00:09:35.148507 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:36 crc kubenswrapper[5121]: I0218 00:09:36.151289 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:37 crc kubenswrapper[5121]: I0218 00:09:37.151172 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:37 crc kubenswrapper[5121]: E0218 00:09:37.326303 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.151344 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.763318 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765012 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765092 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765148 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:38 crc kubenswrapper[5121]: E0218 00:09:38.776861 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:39 crc kubenswrapper[5121]: I0218 00:09:39.149206 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:40 crc kubenswrapper[5121]: I0218 00:09:40.151216 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.148419 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.270145 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271574 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.272062 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.272420 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.272735 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.282705 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:41.272698642 +0000 UTC m=+64.787156387,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.876620 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:42 crc kubenswrapper[5121]: I0218 00:09:42.149506 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:43 crc kubenswrapper[5121]: I0218 00:09:43.151324 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.148624 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.297448 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xm5pj" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.308015 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xm5pj" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.372844 5121 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.970971 5121 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.270140 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.271953 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.272024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.272037 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.272531 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.310098 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-20 00:04:44 +0000 UTC" deadline="2026-03-13 19:35:18.125232573 +0000 UTC" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.310174 5121 
certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="571h25m32.81506488s" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.777593 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779030 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779136 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779339 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.791101 5121 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.791550 5121 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.791591 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796521 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796596 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: 
I0218 00:09:45.796679 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796708 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.823437 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837567 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837777 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837808 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.850560 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861075 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861133 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861172 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861185 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.871207 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879877 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879983 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.880007 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.880022 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896296 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896439 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896473 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.997492 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.098691 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.199264 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.300454 5121 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.400937 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.501225 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.601467 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.701574 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.802727 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.903664 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.004138 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.105034 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.205827 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.306283 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.326686 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 
00:09:47.406766 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.507024 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.607431 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.707763 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.808281 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.908979 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.010228 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.111108 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.212167 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.313139 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.413468 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.514616 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 
00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.615378 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.715934 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.816227 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.916596 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.017723 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.118677 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.219085 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.319499 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.419989 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.521207 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.621862 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.722449 5121 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.823378 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.923883 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.024210 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.125063 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.225517 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.326495 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.427498 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.527699 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.628635 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.729167 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.830101 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.931087 5121 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.032271 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.133431 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.234449 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.334775 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.435931 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.536229 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.637375 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.738420 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.839556 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.940354 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.041099 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc 
kubenswrapper[5121]: E0218 00:09:52.141361 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.242166 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.342702 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.443726 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.544940 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.645686 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.746792 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.847433 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.947788 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.048349 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.149339 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.250488 5121 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.270587 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271614 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271692 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271708 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.272232 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.272529 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.351252 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.452369 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.552730 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.605082 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.606873 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"} Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.607116 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608025 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608193 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608267 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.609834 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.653348 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.754338 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.854976 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.955409 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.055758 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.156277 
5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.257038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.357181 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.457450 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.558025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.659081 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.760200 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.861253 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.962421 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.063146 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.163466 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.264317 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc 
kubenswrapper[5121]: E0218 00:09:55.364808 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.465205 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.565999 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.615511 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.616323 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618162 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" exitCode=255 Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618249 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"} Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618324 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618615 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619468 5121 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619767 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.622543 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.623107 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.623601 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.666895 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.767113 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.867391 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.968585 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.069330 
5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.162342 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171880 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171904 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171954 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.190186 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204368 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204449 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204469 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204498 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.222789 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234730 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234865 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234884 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.249885 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260532 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260592 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260605 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260622 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260636 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273618 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273827 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273857 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.374835 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.475814 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.576008 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.622848 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.677123 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.778047 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.879806 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.982056 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.082890 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.183062 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.284057 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.327793 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.384528 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.485121 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.586355 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.687282 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.787918 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.888535 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.989379 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.090227 5121 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.190727 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.290888 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.391025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.492181 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.592720 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.693848 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.794078 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.894439 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.994754 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.095765 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.196492 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 
00:09:59.296747 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.397234 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.497428 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.598048 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.698851 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.799547 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.900025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.000149 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.100302 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.201295 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.301947 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.403128 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.503324 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.604548 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.705216 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.806087 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.907038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.008192 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.109354 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.210524 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.311267 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.411931 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.512641 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.613266 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.714349 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.816038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.916706 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.017885 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.118047 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.219285 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.270217 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.272223 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.320333 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.420941 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.521212 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.621316 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.721900 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.822985 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.924211 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.024858 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.125739 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.225919 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.327066 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.427638 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.528473 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.607529 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.607958 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609514 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609573 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609597 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.610209 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.610532 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.610810 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.628731 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.730025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.801669 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.832895 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833220 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833350 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833480 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833614 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:03Z","lastTransitionTime":"2026-02-18T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.874157 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.889484 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936870 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936882 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:03Z","lastTransitionTime":"2026-02-18T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.990006 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.039857 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.039978 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040009 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040042 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040067 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.089740 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143736 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143809 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143864 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143892 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.169347 5121 apiserver.go:52] "Watching apiserver"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.180492 5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.183048 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tqxjt","openshift-multus/multus-additional-cni-plugins-n2m5r","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-image-registry/node-ca-vsc9f","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-ss65g","openshift-network-operator/iptables-alerter-5jnd7","openshift-multus/multus-9dxsb","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-7tprw","openshift-etcd/etcd-crc","openshift-multus/network-metrics-daemon-mlvtl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"]
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.185030 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.186505 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.186699 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.187451 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.187749 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.187815 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.188860 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.189212 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.190542 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.190637 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.192799 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.192932 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.194022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.195453 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.197151 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.198309 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.198831 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.199301 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.199533 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.200377 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vsc9f"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208389 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208729 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208915 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.210996 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.212358 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.215519 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218392 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218467 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218393 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218868 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.219128 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.220066 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.220302 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.223884 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.223967 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.224381 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.224855 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225694 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225718 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225880 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.226225 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.227833 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.228057 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.228402 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.230843 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.232415 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.233769 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.234517 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.236679 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.236951 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237021 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237521 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237846 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240143 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240426 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240982 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.241379 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.242250 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.242713 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.243341 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.244043 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.245100 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.245858 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246855 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246938 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246966 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.247103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.247184 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.262153 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.276216 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.277270 5121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.288908 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 00:10:04 crc kubenswrapper[5121]: 
I0218 00:10:04.291174 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.305134 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309698 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309771 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod 
\"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309825 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310353 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310522 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310723 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311008 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311087 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311204 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311282 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311427 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311543 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311779 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311885 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312233 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312349 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312548 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312663 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312756 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312836 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312980 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313067 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313460 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313550 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313680 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313761 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313908 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313990 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314072 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314146 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" 
(UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314214 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314286 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314363 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314917 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315061 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315152 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315760 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315869 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316042 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316125 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316197 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316269 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316347 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316497 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316572 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316664 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316754 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316847 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316990 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317138 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317210 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317282 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317357 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317508 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317579 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317675 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317784 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317895 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318007 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318082 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318155 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318247 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310797 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311387 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311996 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312375 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312778 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313147 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313328 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313362 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313394 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313710 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313909 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313894 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314095 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314638 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314812 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314968 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314788 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315052 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315173 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315201 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315613 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315721 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315757 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315894 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316440 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316486 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316937 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316548 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317010 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317162 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317292 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317342 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317807 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317971 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319039 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319110 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319552 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319575 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319968 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320085 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320360 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320420 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320475 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320529 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320581 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320866 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320932 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320991 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321045 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321108 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321200 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321275 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321258 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321418 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321577 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321707 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321789 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321848 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321905 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321959 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322016 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322074 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322129 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322191 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322246 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322304 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322382 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322448 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322555 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322606 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322689 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID:
\"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322846 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322906 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322950 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323026 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323078 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323129 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323309 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323367 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323425 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323462 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323498 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.323538 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323574 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323613 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323669 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323703 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323637 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" 
(OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323738 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323862 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323901 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323837 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323931 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323905 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323962 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.324128 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.324213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325189 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325258 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325413 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325527 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325533 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325567 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325571 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326104 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326139 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326140 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326267 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326317 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326327 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326381 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326486 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326527 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326495 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326564 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326598 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326627 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326686 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326722 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326753 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326786 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326810 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326846 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" 
(UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326874 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326904 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326928 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326953 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326981 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327007 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327180 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327211 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327238 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327275 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327302 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327328 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327370 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327395 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328997 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326559 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335153 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326571 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326641 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326794 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327103 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327447 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.327465 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.827407027 +0000 UTC m=+88.341864762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335397 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327694 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328097 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336384 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328336 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328344 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328553 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328563 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328724 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336487 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328878 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329113 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329245 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329455 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329684 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336584 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329988 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330016 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330632 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330635 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330122 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330812 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330846 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331440 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331562 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331754 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332169 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332411 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332457 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332605 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332713 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332997 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332922 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333036 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333034 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333512 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334367 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334424 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334784 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334806 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334887 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335103 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335805 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326148 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335826 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335971 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336254 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336479 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337683 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337746 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336055 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337783 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337844 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337886 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337942 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337970 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337999 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338026 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338066 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338580 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338622 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338758 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339221 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339244 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339257 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339419 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339482 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339348 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339541 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339700 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339778 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339789 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339821 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339947 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339979 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340034 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340068 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340092 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340141 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340167 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340163 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340175 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340194 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340233 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340266 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340281 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340320 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340364 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340400 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340431 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340466 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340551 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340582 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340622 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340676 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340708 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340745 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340781 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340773 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341179 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341245 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341295 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.341339 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341388 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341445 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341481 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341520 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341563 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341610 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341643 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341779 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341837 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341876 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341913 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341956 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341996 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342022 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342057 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342083 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342105 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342137 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342165 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342189 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342215 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342241 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342264 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342414 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342497 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342524 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342555 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342582 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342606 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342629 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.342690 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342721 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342748 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342770 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342826 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342850 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341187 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341987 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342038 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342473 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342688 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342716 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.343947 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.344183 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.344209 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342872 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.345561 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346152 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346198 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346558 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346895 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347138 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5wk\" (UniqueName: \"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347221 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347186 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347299 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347387 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347601 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347772 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.347902 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.348044 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.848015957 +0000 UTC m=+88.362473692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348051 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347808 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348080 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348130 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348176 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348210 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348239 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348333 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod 
\"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348383 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348418 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348539 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348539 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348634 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348907 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349308 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349450 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349533 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349617 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349853 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349944 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350073 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350144 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350217 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350248 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350292 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.350419 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350460 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350481 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350709 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350743 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350837 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.350951 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.351102 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.851074438 +0000 UTC m=+88.365532173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351167 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351262 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351386 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351429 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351522 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351566 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351619 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351758 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351739 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351790 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351517 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351889 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352613 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352669 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352638 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352741 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352795 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352798 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352858 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352113 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352902 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352521 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352966 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353190 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353211 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353227 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353242 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353260 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353275 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353292 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353309 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353324 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353340 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353355 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353374 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353386 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353402 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353416 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353434 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353450 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353466 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353483 5121 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353501 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353518 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353530 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353546 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353558 5121 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353570 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353582 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353598 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353614 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353630 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353643 5121 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353705 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353719 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353733 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353752 5121 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353768 5121 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353782 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353796 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353814 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353829 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353843 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353855 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353872 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353886 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353820 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353902 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353918 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353430 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353936 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352841 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354012 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354037 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354060 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353967 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354140 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354215 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354237 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354291 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354348 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355114 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355149 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355186 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355213 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355255 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355273 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355296 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355311 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355324 5121 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355350 5121 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355387 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355406 5121 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355422 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355443 5121 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355458 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355471 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355486 5121 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355503 5121 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355486 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355516 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.356997 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357031 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357046 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357142 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357593 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357361 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358751 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355203 5121 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.362474 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.364306 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.364837 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.365438 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.366194 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.368063 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371389 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371425 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371440 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371451 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371465 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371479 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371494 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371511 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371522 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371534 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371545 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371557 5121 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371567 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: 
\"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371578 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371589 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371601 5121 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371612 5121 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371622 5121 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371632 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371643 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371668 5121 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371678 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371688 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371697 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371707 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371717 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371727 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.371736 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371745 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371754 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371765 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371775 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371785 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371797 5121 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.371808 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371818 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371830 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371839 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371848 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371857 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371866 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371877 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: 
\"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371886 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371895 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371904 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371915 5121 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371923 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371934 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371946 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371985 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371997 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372007 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372017 5121 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372026 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372037 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372047 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.372057 5121 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372067 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372076 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372085 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372094 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372104 5121 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372114 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372124 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372133 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372143 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372152 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372163 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372173 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372182 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372190 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372200 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372209 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372219 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372227 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372235 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372244 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372252 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372260 5121 reconciler_common.go:299] "Volume detached for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372270 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372279 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372289 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372299 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372309 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372318 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372326 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372337 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372345 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372354 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372365 5121 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372377 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372389 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372399 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372407 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372415 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372423 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372431 5121 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372441 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372451 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372461 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372472 5121 reconciler_common.go:299] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372481 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372490 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373686 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373707 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373719 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373793 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.873772944 +0000 UTC m=+88.388230679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375166 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375281 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375377 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375571 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.87553448 +0000 UTC m=+88.389992405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.375810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.376202 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377162 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377630 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377830 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378231 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378264 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378337 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378372 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378406 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.379201 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.379515 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.380356 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.380381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.381568 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.382029 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.382701 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.383315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.383526 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384374 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384686 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384722 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384700 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385189 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385251 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385470 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385848 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.389736 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.394425 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.395742 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.400912 5121 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.405229 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.408839 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.409978 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.419259 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.430991 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.444097 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.456104 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457665 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457727 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457744 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457757 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.472143 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473153 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473200 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473220 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473241 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473260 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473277 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473296 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473334 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473349 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.473367 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.473455 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.973427492 +0000 UTC m=+88.487885217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473540 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473589 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473372 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx5wk\" (UniqueName: \"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473678 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473708 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473715 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473770 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473791 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473835 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473879 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473886 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473916 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473945 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473969 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473990 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474008 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 
crc kubenswrapper[5121]: I0218 00:10:04.474132 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474215 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474238 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474289 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474313 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474355 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474481 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474531 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474915 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod 
\"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475485 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475469 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475606 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475681 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475700 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475760 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475797 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod 
\"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475842 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475923 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475941 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: 
\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475966 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476025 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476064 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476111 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476110 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.476142 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476194 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476412 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476438 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476457 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476487 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476558 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476575 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476597 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476617 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476637 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476695 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476727 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476897 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476904 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476976 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477336 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477666 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477720 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477737 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477811 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477762 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477840 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477839 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477873 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477905 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477987 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477986 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478066 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478088 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478124 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478197 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.478278 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478291 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478303 5121 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478314 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478324 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478335 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478345 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478349 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478356 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478385 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478390 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478416 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478425 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478435 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478445 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478454 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478463 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478473 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478483 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478493 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478504 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.478513 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478522 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478531 5121 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478540 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478550 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478558 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478567 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478579 5121 reconciler_common.go:299] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478588 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478599 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478610 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478619 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478629 5121 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478638 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478667 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478678 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478688 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478699 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478710 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478721 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478723 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478733 5121 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478794 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478817 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478829 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478837 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478864 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478879 5121 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478890 5121 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478901 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478911 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478925 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478938 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478983 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479001 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479011 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479622 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.480042 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.480470 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.481052 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.481877 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.482834 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.485188 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.489415 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.489914 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.490878 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.493436 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.493937 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.497335 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx5wk\" (UniqueName: 
\"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.498802 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.498982 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.499946 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.500919 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.501532 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" 
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.512247 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":
\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f644
09e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b
3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00
:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\
":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.514336 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.517888 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: source /etc/kubernetes/apiserver-url.env Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 18 00:10:04 crc kubenswrapper[5121]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.519086 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.526131 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.527512 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801 WatchSource:0}: Error finding container 0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801: Status 404 returned error can't find the container with id 0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801 Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.529331 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.535062 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 18 00:10:04 crc kubenswrapper[5121]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 18 00:10:04 crc kubenswrapper[5121]: ho_enable="--enable-hybrid-overlay" Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 18 00:10:04 crc kubenswrapper[5121]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 18 00:10:04 crc kubenswrapper[5121]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-host=127.0.0.1 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-port=9743 \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ho_enable} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-approver \ Feb 18 00:10:04 crc kubenswrapper[5121]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --wait-for-kubernetes-api=200s \ Feb 18 00:10:04 crc kubenswrapper[5121]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.536121 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.540253 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-webhook \ Feb 18 00:10:04 crc kubenswrapper[5121]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.540860 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156 WatchSource:0}: Error finding container e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156: Status 404 returned error can't find the container with id 
e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.541474 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.542763 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.543543 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.546795 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.550935 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.552049 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.558965 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.559397 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561661 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561922 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561982 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.562058 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: while [ true ]; Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: for f in $(ls /tmp/serviceca); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $f Feb 18 00:10:04 crc kubenswrapper[5121]: ca_file_path="/tmp/serviceca/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ -e "${reg_dir_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: mkdir $reg_dir_path Feb 18 00:10:04 crc kubenswrapper[5121]: cp $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: for d in $(ls /etc/docker/certs.d); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $d 
Feb 18 00:10:04 crc kubenswrapper[5121]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_conf_path="/tmp/serviceca/${dp}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ ! -e "${reg_conf_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: rm -rf /etc/docker/certs.d/$d Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait ${!} Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsc9f_openshift-image-registry(9afb2de0-1fd9-4548-b02d-ba81525f51c8): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.563189 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsc9f" podUID="9afb2de0-1fd9-4548-b02d-ba81525f51c8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.565865 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.571413 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce10664c_304a_460f_819a_bf71f3517fb3.slice/crio-176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b WatchSource:0}: Error finding container 176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b: Status 404 returned error can't find the container with id 176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.573226 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51dcc4ed_63a2_4a92_936e_8ef22eca20d6.slice/crio-6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33 WatchSource:0}: Error finding container 6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33: Status 404 returned error can't find the container with id 6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33 Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.575151 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.578315 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.578802 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 18 00:10:04 crc kubenswrapper[5121]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 18 00:10:04 crc kubenswrapper[5121]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6psrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9dxsb_openshift-multus(51dcc4ed-63a2-4a92-936e-8ef22eca20d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.579216 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.580095 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9dxsb" podUID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.582897 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.588213 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.589402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.590491 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.592431 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47fedd5_33a0_43c1_9e5d_c31c88d07fb8.slice/crio-c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601 WatchSource:0}: Error finding container c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601: Status 404 returned error can't find the container with id c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.596467 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -uo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: HOSTS_FILE="/etc/hosts" Feb 18 00:10:04 crc kubenswrapper[5121]: TEMP_FILE="/tmp/hosts.tmp" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Make a temporary file with the old hosts file's attributes. Feb 18 00:10:04 crc kubenswrapper[5121]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Failed to preserve hosts file. Exiting." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: while true; do Feb 18 00:10:04 crc kubenswrapper[5121]: declare -A svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${services[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: # Fetch service IP from cluster dns if present. We make several tries Feb 18 00:10:04 crc kubenswrapper[5121]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 18 00:10:04 crc kubenswrapper[5121]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 18 00:10:04 crc kubenswrapper[5121]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 18 00:10:04 crc kubenswrapper[5121]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 18 00:10:04 crc kubenswrapper[5121]: for i in ${!cmds[*]} Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: ips=($(eval "${cmds[i]}")) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: svc_ips["${svc}"]="${ips[@]}" Feb 18 00:10:04 crc kubenswrapper[5121]: break Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Update /etc/hosts only if we get valid service IPs Feb 18 00:10:04 crc kubenswrapper[5121]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 18 00:10:04 crc kubenswrapper[5121]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 18 00:10:04 crc kubenswrapper[5121]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Append resolver entries for services Feb 18 00:10:04 crc kubenswrapper[5121]: rc=0 Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${!svc_ips[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: for ip in ${svc_ips[${svc}]}; do Feb 18 00:10:04 crc kubenswrapper[5121]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ $rc -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 18 00:10:04 crc kubenswrapper[5121]: # Replace /etc/hosts with our modified version if needed Feb 18 00:10:04 crc kubenswrapper[5121]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 18 00:10:04 crc kubenswrapper[5121]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: unset svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8wqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-tqxjt_openshift-dns(b47fedd5-33a0-43c1-9e5d-c31c88d07fb8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.596969 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec6f87b_86e0_4893_9709_9dc7381bc95a.slice/crio-8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047 WatchSource:0}: Error finding container 8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047: Status 404 returned error can't find the container with id 8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.597620 5121 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-tqxjt" podUID="b47fedd5-33a0-43c1-9e5d-c31c88d07fb8" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.600719 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 18 00:10:04 crc kubenswrapper[5121]: apiVersion: v1 Feb 18 00:10:04 crc kubenswrapper[5121]: clusters: Feb 18 00:10:04 crc kubenswrapper[5121]: - cluster: Feb 18 00:10:04 crc kubenswrapper[5121]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: server: https://api-int.crc.testing:6443 Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: contexts: Feb 18 00:10:04 crc kubenswrapper[5121]: - context: Feb 18 00:10:04 crc kubenswrapper[5121]: cluster: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: namespace: default Feb 18 00:10:04 crc kubenswrapper[5121]: user: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: current-context: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: kind: Config Feb 18 00:10:04 crc kubenswrapper[5121]: preferences: {} Feb 18 00:10:04 crc kubenswrapper[5121]: users: Feb 18 00:10:04 crc kubenswrapper[5121]: - name: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: user: Feb 18 00:10:04 crc kubenswrapper[5121]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: EOF Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfl5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-7tprw_openshift-ovn-kubernetes(0ec6f87b-86e0-4893-9709-9dc7381bc95a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.601861 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.602429 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.607403 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc15fae_a0c0_4032_b673_383e603fe393.slice/crio-656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949 WatchSource:0}: Error finding container 656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949: Status 404 returned error can't find the container with id 656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949 Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.608182 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9cd074_60f6_4754_9ef8_567f9274e384.slice/crio-3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540 WatchSource:0}: Error finding container 3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540: Status 404 returned error can't find the container with id 3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.610099 5121 
kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-n2m5r_openshift-multus(5bc15fae-a0c0-4032-b673-383e603fe393): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.611485 5121 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podUID="5bc15fae-a0c0-4032-b673-383e603fe393" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.612445 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -euo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 18 00:10:04 crc kubenswrapper[5121]: # As the secret mount is optional we must wait for the files to be present. Feb 18 00:10:04 crc kubenswrapper[5121]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 18 00:10:04 crc kubenswrapper[5121]: TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=0 Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs(){ Feb 18 00:10:04 crc kubenswrapper[5121]: CUR_TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: } Feb 18 00:10:04 crc kubenswrapper[5121]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 5 Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/kube-rbac-proxy \ Feb 18 00:10:04 crc kubenswrapper[5121]: --logtostderr \ Feb 18 00:10:04 crc kubenswrapper[5121]: --secure-listen-address=:9108 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --upstream=http://127.0.0.1:29108/ \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-private-key-file=${TLS_PK} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cert-file=${TLS_CERT} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.612461 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.615799 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 
00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # This is needed so that converting clusters from GA to TP Feb 18 00:10:04 crc kubenswrapper[5121]: # will rollout control plane pods as well Feb 18 00:10:04 crc kubenswrapper[5121]: 
network_segmentation_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" != "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable multi-network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable 
admin network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: if [ "shared" == "shared" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode shared" Feb 18 00:10:04 crc kubenswrapper[5121]: elif [ "shared" == "local" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode local" Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --init-cluster-manager "${K8S_NODE}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-bind-address "127.0.0.1:29108" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-pprof \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-config-duration \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_transit_switch_subnet_opt} \ 
Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${dns_name_resolver_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${persistent_ips_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${network_segmentation_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${gateway_mode_flags} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${route_advertisements_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${preconfigured_udn_addresses_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-ip=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-firewall=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-qos=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-service=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multicast \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multi-external-gateway=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_policy_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${admin_network_policy_enabled_flag} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.617139 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.626308 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.646342 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.647542 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.648951 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.649995 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec 
--],Args:[MULTUS_DAEMON_OPT="" Feb 18 00:10:04 crc kubenswrapper[5121]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 18 00:10:04 crc kubenswrapper[5121]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6psrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9dxsb_openshift-multus(51dcc4ed-63a2-4a92-936e-8ef22eca20d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.650433 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.650459 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.651109 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod multus-additional-cni-plugins-n2m5r_openshift-multus(5bc15fae-a0c0-4032-b673-383e603fe393): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.651192 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9dxsb" podUID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.651578 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.652343 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podUID="5bc15fae-a0c0-4032-b673-383e603fe393" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.652958 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 18 00:10:04 crc kubenswrapper[5121]: apiVersion: v1 Feb 18 00:10:04 crc kubenswrapper[5121]: clusters: Feb 18 00:10:04 crc kubenswrapper[5121]: - cluster: Feb 18 00:10:04 crc kubenswrapper[5121]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 18 00:10:04 
crc kubenswrapper[5121]: server: https://api-int.crc.testing:6443 Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: contexts: Feb 18 00:10:04 crc kubenswrapper[5121]: - context: Feb 18 00:10:04 crc kubenswrapper[5121]: cluster: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: namespace: default Feb 18 00:10:04 crc kubenswrapper[5121]: user: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: current-context: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: kind: Config Feb 18 00:10:04 crc kubenswrapper[5121]: preferences: {} Feb 18 00:10:04 crc kubenswrapper[5121]: users: Feb 18 00:10:04 crc kubenswrapper[5121]: - name: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: user: Feb 18 00:10:04 crc kubenswrapper[5121]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: EOF Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfl5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod ovnkube-node-7tprw_openshift-ovn-kubernetes(0ec6f87b-86e0-4893-9709-9dc7381bc95a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.653124 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.653459 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -euo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 18 00:10:04 crc kubenswrapper[5121]: # As the secret mount is optional we must wait for the files to be present. Feb 18 00:10:04 crc kubenswrapper[5121]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 18 00:10:04 crc kubenswrapper[5121]: TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=0 Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs(){ Feb 18 00:10:04 crc kubenswrapper[5121]: CUR_TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: } Feb 18 00:10:04 crc kubenswrapper[5121]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 5 Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/kube-rbac-proxy \ Feb 18 00:10:04 crc kubenswrapper[5121]: --logtostderr \ Feb 18 00:10:04 crc kubenswrapper[5121]: --secure-listen-address=:9108 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --upstream=http://127.0.0.1:29108/ \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-private-key-file=${TLS_PK} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cert-file=${TLS_CERT} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.654369 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.654402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" 
podUID="ce10664c-304a-460f-819a-bf71f3517fb3" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.654762 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqxjt" event={"ID":"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8","Type":"ContainerStarted","Data":"c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.655940 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: 
ovn_v6_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # This is needed so that converting clusters from GA to TP Feb 18 00:10:04 crc kubenswrapper[5121]: # will rollout control plane pods as well Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" != "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: 
route_advertisements_enable_flag="--enable-route-advertisements" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable multi-network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable admin network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: if [ "shared" == "shared" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode shared" Feb 18 00:10:04 crc kubenswrapper[5121]: elif [ "shared" == "local" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode local" Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --init-cluster-manager "${K8S_NODE}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-bind-address "127.0.0.1:29108" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-pprof \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-config-duration \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${dns_name_resolver_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${persistent_ips_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${network_segmentation_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${gateway_mode_flags} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${route_advertisements_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${preconfigured_udn_addresses_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-ip=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-firewall=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-qos=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-service=true \ 
Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multicast \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multi-external-gateway=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_policy_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${admin_network_policy_enabled_flag} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.656624 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -uo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: HOSTS_FILE="/etc/hosts" Feb 18 00:10:04 crc kubenswrapper[5121]: TEMP_FILE="/tmp/hosts.tmp" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Make a temporary file with the old hosts file's attributes. Feb 18 00:10:04 crc kubenswrapper[5121]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Failed to preserve hosts file. Exiting." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: while true; do Feb 18 00:10:04 crc kubenswrapper[5121]: declare -A svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${services[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: # Fetch service IP from cluster dns if present. 
We make several tries Feb 18 00:10:04 crc kubenswrapper[5121]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 18 00:10:04 crc kubenswrapper[5121]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 18 00:10:04 crc kubenswrapper[5121]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 18 00:10:04 crc kubenswrapper[5121]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 18 00:10:04 crc kubenswrapper[5121]: for i in ${!cmds[*]} Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: ips=($(eval "${cmds[i]}")) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: svc_ips["${svc}"]="${ips[@]}" Feb 18 00:10:04 crc kubenswrapper[5121]: break Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Update /etc/hosts only if we get valid service IPs Feb 18 00:10:04 crc kubenswrapper[5121]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 18 00:10:04 crc kubenswrapper[5121]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 18 00:10:04 crc kubenswrapper[5121]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Append resolver entries for services Feb 18 00:10:04 crc kubenswrapper[5121]: rc=0 Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${!svc_ips[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: for ip in ${svc_ips[${svc}]}; do Feb 18 00:10:04 crc kubenswrapper[5121]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ $rc -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 18 00:10:04 crc kubenswrapper[5121]: # Replace /etc/hosts with our modified version if needed Feb 18 00:10:04 crc kubenswrapper[5121]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 18 00:10:04 crc kubenswrapper[5121]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: unset svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8wqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-tqxjt_openshift-dns(b47fedd5-33a0-43c1-9e5d-c31c88d07fb8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.656824 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsc9f" 
event={"ID":"9afb2de0-1fd9-4548-b02d-ba81525f51c8","Type":"ContainerStarted","Data":"98a363ced3134374ccc1e6a70830a1969dac263587609cb7047c0bddad1bd9be"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.657055 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.657696 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-tqxjt" podUID="b47fedd5-33a0-43c1-9e5d-c31c88d07fb8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.658967 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.659259 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: while [ true ]; Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: for f in $(ls 
/tmp/serviceca); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $f Feb 18 00:10:04 crc kubenswrapper[5121]: ca_file_path="/tmp/serviceca/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ -e "${reg_dir_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: mkdir $reg_dir_path Feb 18 00:10:04 crc kubenswrapper[5121]: cp $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: for d in $(ls /etc/docker/certs.d); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $d Feb 18 00:10:04 crc kubenswrapper[5121]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_conf_path="/tmp/serviceca/${dp}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: rm -rf /etc/docker/certs.d/$d Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait ${!} Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsc9f_openshift-image-registry(9afb2de0-1fd9-4548-b02d-ba81525f51c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.658959 5121 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.660397 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.660619 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsc9f" podUID="9afb2de0-1fd9-4548-b02d-ba81525f51c8" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.660630 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.662058 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.662176 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"142f908f3b7f173342d28521f70a27a943663aa51661d2dadfa6626fc9f5086e"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.663228 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 18 00:10:04 crc kubenswrapper[5121]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 18 00:10:04 crc kubenswrapper[5121]: ho_enable="--enable-hybrid-overlay" Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 18 00:10:04 crc kubenswrapper[5121]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 18 00:10:04 crc kubenswrapper[5121]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-host=127.0.0.1 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-port=9743 \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ho_enable} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-approver \ Feb 18 00:10:04 crc kubenswrapper[5121]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --wait-for-kubernetes-api=200s \ Feb 18 00:10:04 crc kubenswrapper[5121]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664277 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664336 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.664367 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664391 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664406 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.665596 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-webhook \ Feb 18 00:10:04 crc kubenswrapper[5121]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: 
--loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.667701 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.668120 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.669246 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 
18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: source /etc/kubernetes/apiserver-url.env Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1b
df8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.672318 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.680794 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.690097 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.702261 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.713763 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.723379 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.732720 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.742847 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e
29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.763233 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768384 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768619 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.769045 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.779067 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.799322 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.859716 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.873912 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874144 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874264 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874386 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884460 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884586 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884638 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod 
\"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884825 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884870 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884888 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884901 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884872 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884956 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884962 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884965 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.884939508 +0000 UTC m=+89.399397393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884995 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.884978289 +0000 UTC m=+89.399436024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884992 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885013 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.88500654 +0000 UTC m=+89.399464275 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885166 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.885135683 +0000 UTC m=+89.399593458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885348 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.885329768 +0000 UTC m=+89.399787543 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.892498 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x
fl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.919021 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.959790 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976609 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976720 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976748 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976768 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.985313 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.985461 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.985523 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.985507191 +0000 UTC m=+89.499964926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.000240 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.045108 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080539 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 
crc kubenswrapper[5121]: I0218 00:10:05.080829 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080859 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.083510 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.123249 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.164115 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183151 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183255 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 
00:10:05.183281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183310 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183330 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.199352 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.242669 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.270872 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.270882 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.271058 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.271407 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.277942 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.279465 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.279815 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.282816 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285613 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285793 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285818 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.286635 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.291988 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.297522 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.299939 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.301593 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.303080 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.305323 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.308024 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.312571 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.315231 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.319714 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.320273 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.322449 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.323714 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.326172 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.327931 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.330075 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.331684 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.335971 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.337326 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.338791 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.340216 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.342848 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.346222 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.348816 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.351745 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.355101 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.356060 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.358545 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.359540 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.362128 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.368386 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.373186 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.374834 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.376290 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.377587 5121 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.377768 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.383619 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.385985 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389225 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389276 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389288 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389309 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389324 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.390111 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.391628 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.392934 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.395218 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.397210 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.397915 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.398344 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.399872 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.401557 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.403942 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.404989 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.406879 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.408079 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.409542 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.410891 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.413553 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Feb 18 00:10:05 crc 
kubenswrapper[5121]: I0218 00:10:05.414548 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.416221 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.417597 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.443099 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.485372 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d3
1b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplem
entalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491588 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491736 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491790 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.536524 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8
a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":
\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.563455 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.564461 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.564717 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.564801 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594503 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594556 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594589 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594604 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.604508 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.641696 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.692720 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697512 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697620 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697678 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697700 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.719721 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.763004 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800793 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800814 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800827 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.801899 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.843893 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.894775 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895094 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895171 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895289 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895461 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895548 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895524523 +0000 UTC m=+91.409982278 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895553 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895575 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895590 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895626 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895616576 +0000 UTC m=+91.410074311 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895721 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895763 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895753429 +0000 UTC m=+91.410211174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895931 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895982 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895916304 +0000 UTC m=+91.410374109 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896004 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896067 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896242 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.896219091 +0000 UTC m=+91.410676866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904463 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904545 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904576 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904611 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904638 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.996506 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.996819 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.996962 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.996934328 +0000 UTC m=+91.511392063 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007487 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007498 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110337 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110480 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110496 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110526 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213417 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213475 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213504 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213517 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.270567 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.270712 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.270899 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.271145 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316160 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316223 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316234 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316263 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418725 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418756 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.419909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.419991 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420004 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420022 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420034 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.432753 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437275 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437331 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437350 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437385 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.454347 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459530 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459601 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459614 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459635 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459665 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.470710 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475852 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475952 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475977 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.476006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.476034 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.492734 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497297 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497418 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.510446 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.510599 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521146 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521175 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521197 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623608 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623704 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623762 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623784 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.727932 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728002 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728050 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728066 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830716 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830809 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830840 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933818 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933830 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036821 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036889 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036905 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036929 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036946 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140312 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140373 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140414 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.242942 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243059 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243089 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243113 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.270343 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.270390 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.270621 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.270803 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.293301 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.305541 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.314896 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.327067 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345342 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345440 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345461 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345513 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.349789 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.366634 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.382088 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.392787 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.404554 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.414042 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.425241 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.439554 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448171 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448245 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448258 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448279 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448292 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.453384 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.470715 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.488494 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.517337 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\
\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b35
62b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGr
oups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.535875 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":
{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551317 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 
00:10:07.551342 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551377 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551395 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.552426 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.564912 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654524 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654578 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654598 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758360 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758424 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758442 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758455 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861711 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861803 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861843 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861858 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921386 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921528 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921667 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.921595035 +0000 UTC m=+95.436052770 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921730 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921753 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921766 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921783 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921834 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921849 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.921826071 +0000 UTC m=+95.436283806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921921 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921983 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922006 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922061 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922075 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922028016 +0000 UTC m=+95.436485791 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922126 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922116438 +0000 UTC m=+95.436574173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922245 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922316 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922300313 +0000 UTC m=+95.436758058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966320 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966439 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.023448 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.023695 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.023844 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:12.023817421 +0000 UTC m=+95.538275156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069184 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069260 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069280 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069295 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171846 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171889 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171916 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171929 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.269792 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.269935 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.270445 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.270506 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273960 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273981 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273990 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.274001 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.274012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.334504 5121 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376419 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376491 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376510 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376540 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376562 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.394388 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.479911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480009 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480039 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480097 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583267 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583356 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583393 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583412 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.685953 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686017 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686028 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686044 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686056 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788929 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788966 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891754 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891769 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891790 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891802 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994700 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994796 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994835 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994855 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097814 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097874 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097888 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097905 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097917 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201087 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201142 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201152 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201170 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201182 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.269997 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:09 crc kubenswrapper[5121]: E0218 00:10:09.270212 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.270594 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:09 crc kubenswrapper[5121]: E0218 00:10:09.270801 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303960 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303981 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406394 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406805 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406975 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.407128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.407297 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510160 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510185 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510238 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613228 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613307 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613318 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613340 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613352 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716431 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716545 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716575 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716591 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819859 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819990 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.820018 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.820036 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922536 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922612 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922685 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922704 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025018 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025120 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025145 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025160 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128233 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128294 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128306 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128339 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230678 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230758 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230773 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230815 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.270175 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.270258 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:10 crc kubenswrapper[5121]: E0218 00:10:10.270402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:10 crc kubenswrapper[5121]: E0218 00:10:10.270619 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333355 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333438 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333462 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333479 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436526 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436551 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539827 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539860 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539882 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642161 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642234 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642253 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642283 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642304 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744414 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744547 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744579 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744604 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847327 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847427 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847479 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950674 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950738 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950777 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950813 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053578 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053675 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053693 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053735 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156339 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156357 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156384 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156404 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259502 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259569 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259591 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259639 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.270225 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.270402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.270746 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.270925 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363499 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363610 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363684 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363755 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467542 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467555 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467586 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570914 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570976 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570998 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673894 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673924 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673935 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776626 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776725 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776745 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776768 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776782 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879083 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879147 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973511 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973778 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.973844 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.973797959 +0000 UTC m=+103.488255734 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973921 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.973953 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974059 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974033115 +0000 UTC m=+103.488490880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974107 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974190 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974225 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974122 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974318 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974364 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974392 5121 
projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974364 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974316132 +0000 UTC m=+103.488773917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974518 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974490718 +0000 UTC m=+103.488948573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974548 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974532319 +0000 UTC m=+103.488990204 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.982955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983013 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983033 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983058 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983077 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.075561 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.075870 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.076028 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:20.075995105 +0000 UTC m=+103.590452880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.085926 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086036 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086052 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086096 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188850 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.189005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.189028 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.269983 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.270069 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.270181 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.270428 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291411 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291497 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291512 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291533 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291547 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.393908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.393999 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394020 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394048 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394067 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496581 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496694 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496732 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599469 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599495 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599549 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702261 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702309 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702320 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702338 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702348 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805357 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805408 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805421 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908021 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908078 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908088 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908121 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010802 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010878 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010926 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010947 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114566 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114654 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114765 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.216965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217052 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217078 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217112 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217138 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.270503 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:13 crc kubenswrapper[5121]: E0218 00:10:13.270714 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.270911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:13 crc kubenswrapper[5121]: E0218 00:10:13.271256 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319541 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319597 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319656 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319738 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421817 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421954 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421969 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524762 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524774 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626871 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626883 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626901 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729120 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729185 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729198 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729218 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729231 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832374 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832484 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832505 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832525 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.935950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936111 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039244 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039260 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039284 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039300 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142205 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142278 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142298 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142312 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245727 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245787 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245817 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245829 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.269749 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:10:14 crc kubenswrapper[5121]: E0218 00:10:14.269889 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.269931 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:14 crc kubenswrapper[5121]: E0218 00:10:14.270123 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347403 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347466 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347496 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450259 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450313 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450323 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450337 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450346 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.552897 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553042 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553063 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553089 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553107 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656415 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656458 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656480 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759388 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759449 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759500 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862743 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965869 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965941 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965963 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965975 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071781 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071794 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071824 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174409 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174431 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174444 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.269936 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.270276 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:15 crc kubenswrapper[5121]: E0218 00:10:15.270273 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:15 crc kubenswrapper[5121]: E0218 00:10:15.270579 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276038 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276090 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276139 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378534 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378602 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378723 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481871 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481964 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.482027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.482053 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584856 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584881 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689113 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689403 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689417 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689439 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689456 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.696474 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299"}
Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.719161 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f
06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.743370 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.758630 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.776211 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.790374 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792296 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792359 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792399 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792416 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.818450 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.829897 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.843699 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.859771 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.878780 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.891242 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894858 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894923 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894951 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.906527 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.917014 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.929019 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.939364 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.953111 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.970358 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.982092 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.995075 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc 
kubenswrapper[5121]: I0218 00:10:15.996670 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996719 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996740 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996764 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996782 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.012181 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099436 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099448 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099480 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202457 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202533 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202580 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202599 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.269938 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.270445 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.270840 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.271082 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305151 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305217 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305235 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305262 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305280 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410860 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410917 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410964 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512665 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512740 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512754 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512774 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512785 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615066 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615306 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615519 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.702001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.705812 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.705856 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.709175 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" exitCode=0 Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.709204 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714330 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714423 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc 
kubenswrapper[5121]: I0218 00:10:16.714511 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714579 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714638 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.723455 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.727102 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738891 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738975 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.754777 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760188 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760227 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760238 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760264 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.762558 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.771967 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777696 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777743 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777755 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777775 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777788 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.783156 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.794378 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799091 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799150 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799174 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799198 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799151 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799217 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.808348 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.809875 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5b
ea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3
6c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83
612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d
4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.810406 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812240 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812390 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812573 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812724 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.826100 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.835498 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.853964 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.864772 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.879338 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.892947 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.904704 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.914335 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915628 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915721 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915768 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915787 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.926172 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.938538 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.959039 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.978725 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.993144 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.008859 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.018900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018993 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.019006 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.024894 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.037961 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.051545 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.066822 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.094397 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.109469 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.121750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122572 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122627 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122655 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122680 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.127225 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.140950 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.159244 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.170238 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.180406 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.188080 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.203170 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.216985 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224442 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224494 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224510 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224522 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.231652 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.241869 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.254110 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.264381 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276142 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:17 crc kubenswrapper[5121]: E0218 00:10:17.276346 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276711 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:17 crc kubenswrapper[5121]: E0218 00:10:17.276917 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276966 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.296241 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.305898 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.324151 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326159 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326245 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326268 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326292 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326311 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.340869 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.352723 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.365727 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.379828 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.389906 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.400983 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.410776 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.422916 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428932 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428977 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428987 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.429002 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.429014 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.434194 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.447286 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.470399 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.488148 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.501821 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.512482 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.530718 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532538 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.533165 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.533192 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.541112 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686298 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686395 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686418 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686432 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.716055 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5" exitCode=0 Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.716129 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721300 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721363 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721374 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721384 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" 
event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721394 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.723295 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqxjt" event={"ID":"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8","Type":"ContainerStarted","Data":"84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.730252 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.742609 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.757421 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.769533 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.782150 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789734 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789802 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 
00:10:17.789821 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789870 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.791442 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.804101 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.816410 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"
cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.830632 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.843523 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.856285 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.872571 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.889001 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893461 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893475 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893496 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893509 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.910193 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.926883 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.941494 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.953533 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.975302 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.987249 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995928 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995948 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995962 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.001060 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.017188 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.028079 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.039706 5121 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.054148 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0
],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc
9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.073191 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.086170 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099126 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099181 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099192 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099210 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099228 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099946 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.108972 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.160323 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.195852 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201658 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201680 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201706 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.231286 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.270430 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.270509 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:18 crc kubenswrapper[5121]: E0218 00:10:18.270672 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:18 crc kubenswrapper[5121]: E0218 00:10:18.270803 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.272467 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304468 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304481 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304500 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304512 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.311576 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.363029 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.393885 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408446 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408499 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408537 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408555 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.432547 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.473389 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.511641 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514478 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514542 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514565 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514579 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619438 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619864 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619879 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619910 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722221 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722611 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722621 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722636 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722649 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.732711 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.735288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.749897 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.769603 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.781339 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.794446 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.802291 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.816439 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825739 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825834 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825852 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.826047 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.836250 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.872307 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.914496 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927925 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927987 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.953727 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.991766 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031739 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031870 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.033448 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.074234 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.110091 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134175 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134247 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134271 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134297 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134317 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.156438 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.196637 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.235729 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.236930 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237029 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237075 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237162 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.270945 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.270948 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:19 crc kubenswrapper[5121]: E0218 00:10:19.271250 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:19 crc kubenswrapper[5121]: E0218 00:10:19.271456 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.276551 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339621 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339652 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339680 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.442531 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443068 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443240 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443416 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443621 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547137 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547204 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547222 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547249 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547269 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649840 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649921 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649974 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649993 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.745213 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.748272 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676" exitCode=0 Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.748391 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.753529 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsc9f" event={"ID":"9afb2de0-1fd9-4548-b02d-ba81525f51c8","Type":"ContainerStarted","Data":"e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754936 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754968 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: 
I0218 00:10:19.754978 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754989 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754999 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.762854 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.779845 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.790629 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.817198 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.831981 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.847252 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.858665 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859351 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859397 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859436 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859450 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.872453 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.885357 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.898028 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.907201 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.917046 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.926988 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.938512 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.955380 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967769 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967842 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967852 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.978429 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supple
mentalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.989531 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde7261
09a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013754 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013853 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013929 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013966 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014056 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014118 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014099248 +0000 UTC m=+119.528556993 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014456 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014444917 +0000 UTC m=+119.528902672 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014559 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014574 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014586 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014616 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014607681 +0000 UTC m=+119.529065426 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014689 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014715 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014708183 +0000 UTC m=+119.529165928 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014764 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014775 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014783 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014808 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014800526 +0000 UTC m=+119.529258271 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.040683 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",
\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a
682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440
e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071213 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071333 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071375 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071390 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.072997 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.115627 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.115922 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.116164 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.116138968 +0000 UTC m=+119.630596693 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.117115 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d0
2ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b
6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.153853 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174029 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 
00:10:20.174086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174115 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.195276 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.235377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.269842 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.270019 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.270191 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.270324 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.271545 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.271882 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276641 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276767 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276787 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276834 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276859 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.290381 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.316392 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.358377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379530 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379552 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379574 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379591 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.392996 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f7
5eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.433774 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e
3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.473872 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481983 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.482005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.482018 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.512094 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.553287 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.584713 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585110 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585208 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585321 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585436 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.592395 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.632541 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.679155 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.687949 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688008 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688022 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688046 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688061 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.716030 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.751453 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.760673 5121 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"84ca5e1e4b35397de8f78366548363a661feb4d56e2620632adfb38fece38466"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.760848 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d264212f574ac694a4d2414e785c3d7f289fd6e5e6b18def1902e17badf38968"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.762614 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993" exitCode=0 Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.762689 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.765509 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.770172 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790020 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc 
kubenswrapper[5121]: I0218 00:10:20.790100 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790116 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790144 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790161 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.792180 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc 
kubenswrapper[5121]: I0218 00:10:20.833190 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.881947 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892003 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892053 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892066 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892097 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.913555 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.954269 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ca5e1e4b35397de8f78366548363a661feb4d56e2620632adfb38fece38466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://d264212f574ac694a4d2414e785c3d7f289fd6e5e6b18def1902e17badf38968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994235 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994877 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994888 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994902 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.044433 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.073257 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\
\\"cri-o://74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096894 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096959 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.111582 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.152722 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.194510 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198717 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198760 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198772 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198814 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.233899 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.271080 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.278247 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.278284 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:21 crc kubenswrapper[5121]: E0218 00:10:21.278444 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:21 crc kubenswrapper[5121]: E0218 00:10:21.278590 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.300954 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.301196 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.301453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.302129 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.302224 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.315190 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.352782 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.393257 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405838 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405859 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405886 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405909 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.438796 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.474287 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.508965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509038 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509065 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509099 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509125 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.524396 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}}
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.595566 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\
":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611189 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611235 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611247 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611264 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611274 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713341 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713363 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713376 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.781239 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="9eb9b4520abbd7e9304ca9519934fdcaf9dd7220dde2d520336c2cd5252af409" exitCode=0 Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.781343 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"9eb9b4520abbd7e9304ca9519934fdcaf9dd7220dde2d520336c2cd5252af409"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815612 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815733 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815786 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815810 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.882949 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tqxjt" podStartSLOduration=83.882927394 podStartE2EDuration="1m23.882927394s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.849772731 +0000 UTC m=+105.364230546" watchObservedRunningTime="2026-02-18 00:10:21.882927394 +0000 UTC m=+105.397385129" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918473 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918567 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918625 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.959758 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.959738393 podStartE2EDuration="17.959738393s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.959398414 +0000 UTC m=+105.473856169" watchObservedRunningTime="2026-02-18 00:10:21.959738393 +0000 UTC m=+105.474196128" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.973972 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.973937582 podStartE2EDuration="18.973937582s" podCreationTimestamp="2026-02-18 00:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.973142371 +0000 UTC m=+105.487600196" watchObservedRunningTime="2026-02-18 00:10:21.973937582 +0000 UTC m=+105.488395357" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021237 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021275 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021290 5121 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.025761 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podStartSLOduration=84.025744753 podStartE2EDuration="1m24.025744753s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.025156168 +0000 UTC m=+105.539613983" watchObservedRunningTime="2026-02-18 00:10:22.025744753 +0000 UTC m=+105.540202489" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.080773 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9dxsb" podStartSLOduration=84.080738471 podStartE2EDuration="1m24.080738471s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.048435509 +0000 UTC m=+105.562893324" watchObservedRunningTime="2026-02-18 00:10:22.080738471 +0000 UTC m=+105.595196276" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.081163 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=18.081154642 podStartE2EDuration="18.081154642s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 00:10:22.079454147 +0000 UTC m=+105.593911902" watchObservedRunningTime="2026-02-18 00:10:22.081154642 +0000 UTC m=+105.595612407" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124498 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124539 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124569 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124582 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.129002 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.127911798 podStartE2EDuration="19.127911798s" podCreationTimestamp="2026-02-18 00:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.12496584 +0000 UTC m=+105.639423615" watchObservedRunningTime="2026-02-18 00:10:22.127911798 +0000 UTC m=+105.642369583" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227153 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227168 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.235247 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vsc9f" podStartSLOduration=84.235220481 podStartE2EDuration="1m24.235220481s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.235019346 +0000 UTC m=+105.749477091" watchObservedRunningTime="2026-02-18 00:10:22.235220481 +0000 UTC m=+105.749678206" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.269720 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.269755 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:22 crc kubenswrapper[5121]: E0218 00:10:22.269931 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:22 crc kubenswrapper[5121]: E0218 00:10:22.270051 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.315029 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podStartSLOduration=83.31500638 podStartE2EDuration="1m23.31500638s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.31464049 +0000 UTC m=+105.829098235" watchObservedRunningTime="2026-02-18 00:10:22.31500638 +0000 UTC m=+105.829464105" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329460 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329531 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329547 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329558 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432138 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432230 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432270 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535131 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535150 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.636998 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637059 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637091 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739007 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739115 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739134 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739145 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.787799 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.791646 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.791984 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.792028 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.792038 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.823823 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.832091 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841249 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841286 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc 
kubenswrapper[5121]: I0218 00:10:22.841299 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841348 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841363 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.844984 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podStartSLOduration=84.844970854 podStartE2EDuration="1m24.844970854s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.843120165 +0000 UTC m=+106.357577920" watchObservedRunningTime="2026-02-18 00:10:22.844970854 +0000 UTC m=+106.359428589" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943883 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943951 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943972 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943986 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046692 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046708 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046729 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046745 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149378 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149448 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149466 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149493 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149511 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252713 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252842 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.270585 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:23 crc kubenswrapper[5121]: E0218 00:10:23.270802 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.270840 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:23 crc kubenswrapper[5121]: E0218 00:10:23.271066 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355476 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355501 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458594 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458738 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458757 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561354 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561418 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561437 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561489 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663839 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663864 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766397 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766477 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766502 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766529 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766551 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.803004 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c6e1cf4b8e8f8bf8edaa911bf15ccc6c1afae31bcf0a3c9aced7057707efb155"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.808999 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0" exitCode=0
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.809075 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.869518 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870062 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870412 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870734 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.871012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974225 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974241 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974253 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079274 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079345 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079364 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079396 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079417 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181921 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181975 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181987 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.182005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.182021 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.269854 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:10:24 crc kubenswrapper[5121]: E0218 00:10:24.270134 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.269886 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:24 crc kubenswrapper[5121]: E0218 00:10:24.270609 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285885 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285917 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285928 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285956 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390121 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390174 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390183 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390202 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390212 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.492997 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493091 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493119 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493138 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596529 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596629 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596685 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596704 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699331 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699393 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699411 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699424 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802399 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802420 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802437 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.818992 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="81e434867b21e9bbfc675f454a70822b0a690cbb63fce7c952838ef2ad557b31" exitCode=0
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.820899 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"81e434867b21e9bbfc675f454a70822b0a690cbb63fce7c952838ef2ad557b31"}
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905065 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905085 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905106 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905121 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008406 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008473 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008540 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.112912 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113263 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113295 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.126582 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlvtl"]
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.126757 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.126868 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215896 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215979 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215998 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.216027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.216046 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.269942 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.270117 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.270597 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.270736 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318299 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318323 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318374 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.420715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421895 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421922 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421941 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524635 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524749 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627893 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627952 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627995 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.628012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730534 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730596 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730641 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730694 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832681 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832709 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832728 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.834933 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"755997e9b414036d2bacb2870115aa879b252238b47a9af329648aa8e97f12fb"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936019 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936079 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936094 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936113 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936125 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038897 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038954 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.141812 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142852 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245609 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245769 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245820 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245839 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.270010 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.270081 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:26 crc kubenswrapper[5121]: E0218 00:10:26.270232 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:26 crc kubenswrapper[5121]: E0218 00:10:26.270438 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.348916 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.348994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349015 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349043 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349062 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451408 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451478 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451497 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451522 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451540 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554044 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554109 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554129 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554157 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554175 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656522 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656622 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656684 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656736 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760168 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760182 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760202 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760216 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836312 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836431 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.904094 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podStartSLOduration=88.904058912 podStartE2EDuration="1m28.904058912s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:25.875257281 +0000 UTC m=+109.389715096" watchObservedRunningTime="2026-02-18 00:10:26.904058912 +0000 UTC m=+110.418516687" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.905351 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw"] Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.237862 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.251261 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336820 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336868 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336872 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: E0218 00:10:27.337004 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:27 crc kubenswrapper[5121]: E0218 00:10:27.337390 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.339867 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.340195 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.342999 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.346259 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424265 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424599 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424802 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424930 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.425175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526477 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526567 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526609 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526633 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526767 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526863 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.528026 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.534745 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.557610 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.663149 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: W0218 00:10:27.688320 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301c7ba1_7668_44c5_bae1_acad05f92eb5.slice/crio-3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7 WatchSource:0}: Error finding container 3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7: Status 404 returned error can't find the container with id 3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7 Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.844339 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" event={"ID":"301c7ba1-7668-44c5-bae1-acad05f92eb5","Type":"ContainerStarted","Data":"3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7"} Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.269718 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.269754 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:28 crc kubenswrapper[5121]: E0218 00:10:28.269963 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:28 crc kubenswrapper[5121]: E0218 00:10:28.270085 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.850041 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" event={"ID":"301c7ba1-7668-44c5-bae1-acad05f92eb5","Type":"ContainerStarted","Data":"5636d14b2bde59863ee496176825e03ce7b2920be8b499564a495e5f220d686f"} Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.875784 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" podStartSLOduration=90.875754103 podStartE2EDuration="1m30.875754103s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:28.875122576 +0000 UTC m=+112.389580351" watchObservedRunningTime="2026-02-18 00:10:28.875754103 +0000 UTC m=+112.390211928" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.132994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.133281 5121 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.190044 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.194025 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202226 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202228 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202444 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.203727 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.208673 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.208903 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.213058 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.215263 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.215679 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.217381 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.221359 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.221575 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226456 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226499 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226737 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227733 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227738 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227897 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227964 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228196 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228390 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228845 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229078 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229097 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229176 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229773 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.231506 5121 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.234504 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.234823 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240203 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240392 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240563 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240711 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240813 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240968 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.241038 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.241152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.243478 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247192 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247408 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247438 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.250669 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.253603 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.254174 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.269767 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.269925 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270009 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270025 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270117 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270152 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270257 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270351 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270432 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270569 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.284348 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.285305 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.286875 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287592 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287609 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287940 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.288095 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.288208 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.289296 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.290575 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.296514 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.306424 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.306663 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.308810 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.308939 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.309770 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.310165 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.310238 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.312871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.314706 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315296 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315380 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315468 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.316060 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.316702 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317414 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317795 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317987 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.318562 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.319887 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320291 5121 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320469 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320681 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320883 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.321486 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.323895 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.325298 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352855 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352912 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352951 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352971 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.352987 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353004 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353019 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353036 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353053 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: 
\"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353071 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353086 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353102 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353118 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353134 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353150 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353166 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353182 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353197 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" 
(UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353218 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353240 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353272 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353288 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353311 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353341 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353363 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod 
\"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353395 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353411 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353426 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353440 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353463 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353481 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353496 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353521 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353548 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" 
(UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353564 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353579 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353596 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353611 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353627 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353644 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353678 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353695 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353710 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.353768 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353799 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353895 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353936 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353980 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354033 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354097 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354121 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354159 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354198 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354220 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354248 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.418776 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429213 5121 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429354 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429740 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.431694 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.433314 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.433639 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435257 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435400 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435485 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435545 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435629 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435700 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435556 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.436875 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.437083 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.439908 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.440077 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.443007 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445272 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445957 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445983 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446160 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446342 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446786 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448267 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448363 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448577 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.449004 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.451707 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.451928 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.457818 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.459097 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.460415 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.460615 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461153 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461474 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461758 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.462255 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.464291 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465571 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465619 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465662 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465765 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465800 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465879 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465905 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465931 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465960 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466005 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466033 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466056 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466118 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466148 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466222 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466211 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466253 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466315 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466404 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466534 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467453 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467535 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467588 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.468584 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.470212 5121 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.470912 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.471803 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472203 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472344 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472712 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472892 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.473546 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.473587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474174 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474449 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: 
\"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474522 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475023 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475076 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475248 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod 
\"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475362 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475397 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475465 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475505 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475525 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.476041 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.476353 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.478622 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.478737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479000 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479044 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479076 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479467 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479515 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479541 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479585 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479626 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479662 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479681 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479709 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479748 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479767 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479882 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479924 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480014 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.480037 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480055 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480073 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480195 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480090 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480770 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482208 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482232 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482528 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482832 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") 
" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483147 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483699 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483705 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483703 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483762 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: 
\"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484052 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484101 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484246 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484519 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: 
\"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484807 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485101 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485290 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485414 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486194 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod 
\"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486568 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486998 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") 
pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487795 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487804 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.488878 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.489965 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.490744 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.491309 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.492928 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.494542 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.495268 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.495400 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.496697 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.500591 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.500779 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.502062 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.505762 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.505917 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.510508 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.510621 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.513012 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.513095 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.515297 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.515475 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.516313 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.518761 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.518879 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.520942 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.520964 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.521072 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.524008 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.524111 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.526129 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.528349 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-mkw5h"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.528399 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.529995 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.536956 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.539842 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.539947 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-mkw5h"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.543017 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.543441 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.545775 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.546225 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549086 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549118 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549361 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.552567 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.552745 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.556469 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.558393 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.558521 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561368 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561390 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561525 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.563529 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-mvs4c"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.563892 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.566567 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.566715 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569323 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569344 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569354 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569363 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569371 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569380 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569389 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569397 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569406 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569418 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569429 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569442 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569460 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569468 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569477 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569495 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569498 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vn45p"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.571914 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.572048 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.576945 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577685 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577736 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rsbpp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577756 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.581119 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.581228 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583470 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583480 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583489 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583501 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583521 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583533 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583581 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583592 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583603 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-mkw5h"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583611 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583622 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583630 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583646 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583669 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583696 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583707 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583716 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583729 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rsbpp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583743 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586189 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586381 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.585513 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586666 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586687 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586710 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586745 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586769 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586863 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586925 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586943 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586978 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586997 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587012 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587027 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587044 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587058 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587122 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587270 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587289 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587320 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587336 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587388 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587428 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587509 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587540 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587558 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8gh7\" (UniqueName: \"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587586 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587602 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587670 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587687 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587717 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587732 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587759 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587889 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.596138 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.616848 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.636133 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.674596 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689356 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689635 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689828 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689970 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690135 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690287 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690577 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690723 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690872 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691154 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691289 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691413 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691526 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691625 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691743 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " 
pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691866 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692071 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692218 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692403 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692635 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692753 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692852 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693116 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692961 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693425 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693550 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693695 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694253 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694526 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693877 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" 
(UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695201 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695349 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695439 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695543 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695688 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695981 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696230 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696504 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696585 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.696703 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696766 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696919 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696958 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: 
\"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697023 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697081 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697108 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697200 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697263 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697286 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697330 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697356 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697401 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697422 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697449 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697498 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697522 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697563 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697611 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697680 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697705 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697793 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697812 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697930 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697962 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.698361 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.698973 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699124 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699234 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wf2\" (UniqueName: 
\"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699397 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699475 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699584 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699195 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699695 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699788 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700197 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700225 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700282 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700348 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.700297 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700383 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700449 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700585 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700690 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700716 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700746 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700764 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700784 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700802 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700822 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700917 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700986 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod 
\"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701011 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701033 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701057 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701082 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701119 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701162 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701188 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701247 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: 
\"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701271 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701292 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701683 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701725 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8gh7\" (UniqueName: 
\"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701886 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702271 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.702304 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702406 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702525 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702600 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.703935 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: 
\"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.705170 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.705878 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.711453 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.714565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.732142 5121 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.736431 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.741353 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.756523 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.759516 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.777352 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.803812 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804283 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804519 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805001 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805176 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805134 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804958 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804245 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805686 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805847 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806118 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806327 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806570 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807177 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807246 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807620 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807818 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.808093 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808333 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808347 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808759 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809048 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod 
\"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809261 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809416 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809631 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809828 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.809953 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810066 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810069 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810151 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810172 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810199 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810227 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810260 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810292 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod 
\"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810379 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810410 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810434 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810472 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810498 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810516 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810547 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810567 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810587 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810612 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810629 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810685 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810760 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810781 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810808 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810835 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810864 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.810905 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810949 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810994 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811022 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wf2\" (UniqueName: \"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: 
\"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811044 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811070 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811112 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811167 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811191 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811216 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811264 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811286 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811309 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811330 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811515 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.811600 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811974 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.812469 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813001 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813148 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813173 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813201 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813221 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813240 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813259 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813440 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.814241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.815216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.815287 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.819471 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.836020 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.852200 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.852349 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.873244 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.876855 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.904535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.911332 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.915192 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.925923 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.933498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.938021 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.938109 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.946872 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.951123 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.959062 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.968544 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.977014 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:29.998106 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.008140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.017716 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.038908 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.050140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.059295 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.078285 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.098488 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.108767 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.114567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.114910 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.116371 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.142314 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.144304 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ee9403_18d0_4528_a3cd_82ea0dba3576.slice/crio-9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14 WatchSource:0}: Error finding container 9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14: Status 404 returned error can't find the container with id 9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.149045 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.163225 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.173364 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.188108 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.188215 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.193473 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.196898 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.219405 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.228706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.230133 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.240930 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.254593 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.257098 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.266743 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.268162 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.269989 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.270255 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.280013 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.298227 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.322093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.331035 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.336511 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.343030 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.344712 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005aa352_e543_4bfd_ba57_b2cb37eb98f6.slice/crio-f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6 WatchSource:0}: Error finding container f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6: Status 404 returned error can't find the container with id f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.350322 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.362200 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.373125 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.377073 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.387731 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.396825 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.409746 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.420748 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.425592 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec21d65e_1eab_42a8_bb64_e6f9ba7b5c69.slice/crio-b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed WatchSource:0}: Error finding container b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed: Status 404 returned error can't find the container with id b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.429454 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc530ba0_1249_4787_8584_22f866581116.slice/crio-8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8 WatchSource:0}: Error finding container 8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8: Status 404 returned error can't find the container with id 8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.438236 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.440668 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.458871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.468732 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa50e1e_3367_4e1b_93fb_aea8f3220c81.slice/crio-b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8 WatchSource:0}: Error finding container b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8: Status 404 returned error can't find the container with id b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.482536 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.498054 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.509498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.516704 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.533438 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.539057 5121 request.go:752] "Waited before sending request" delay="1.017277874s" reason="client-side throttling, not priority and fairness" verb="GET" 
URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.547986 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.555711 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.562755 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.580876 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.587190 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.606597 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.615264 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: 
\"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.623985 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.634553 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.641156 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.660375 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.672525 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.678449 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.696740 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.719454 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.737317 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.758661 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.774247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.776208 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.786388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.796297 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804574 5121 configmap.go:193] Couldn't get configMap 
openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804700 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca podName:cad52ef7-8080-48a2-91e3-5bcfc007b196 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.304676382 +0000 UTC m=+114.819134117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-78c6t" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804779 5121 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804884 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.304861547 +0000 UTC m=+114.819319282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806831 5121 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806872 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert podName:1c378e40-50b9-49d3-bbdf-f9cc1e6baaac nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.306863821 +0000 UTC m=+114.821321546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert") pod "ingress-canary-h64q4" (UID: "1c378e40-50b9-49d3-bbdf-f9cc1e6baaac") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806945 5121 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.807123 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.307093677 +0000 UTC m=+114.821551412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.807981 5121 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808019 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.308011952 +0000 UTC m=+114.822469677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808055 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808089 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert podName:4b1e56fa-e38b-48bc-9768-0bc82aca0a0c nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.308080824 +0000 UTC m=+114.822538559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-zsz4p" (UID: "4b1e56fa-e38b-48bc-9768-0bc82aca0a0c") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809941 5121 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809972 5121 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809991 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.309982174 +0000 UTC m=+114.824439909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.810048 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics podName:cad52ef7-8080-48a2-91e3-5bcfc007b196 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.310030265 +0000 UTC m=+114.824488000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-78c6t" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812449 5121 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config podName:0d3e4d34-c74d-4572-aca8-da4c6c85fa79 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312483011 +0000 UTC m=+114.826940746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config") pod "openshift-kube-scheduler-operator-54f497555d-km69x" (UID: "0d3e4d34-c74d-4572-aca8-da4c6c85fa79") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812524 5121 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812570 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls podName:b46e61bd-a38a-4792-98ee-067e427538c9 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312560913 +0000 UTC m=+114.827018648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls") pod "dns-default-rsbpp" (UID: "b46e61bd-a38a-4792-98ee-067e427538c9") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812597 5121 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812618 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume podName:b46e61bd-a38a-4792-98ee-067e427538c9 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312611924 +0000 UTC m=+114.827069659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume") pod "dns-default-rsbpp" (UID: "b46e61bd-a38a-4792-98ee-067e427538c9") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813698 5121 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813736 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle podName:db5b1911-47a0-41f1-b793-924df4056e20 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313729064 +0000 UTC m=+114.828186799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle") pod "service-ca-74545575db-vlht9" (UID: "db5b1911-47a0-41f1-b793-924df4056e20") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813755 5121 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813774 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key podName:db5b1911-47a0-41f1-b793-924df4056e20 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313769666 +0000 UTC m=+114.828227401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key") pod "service-ca-74545575db-vlht9" (UID: "db5b1911-47a0-41f1-b793-924df4056e20") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813787 5121 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813806 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist podName:9b4e56ad-da89-4541-842d-17ba2d9bcb0a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313801436 +0000 UTC m=+114.828259171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-jc5sl" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813837 5121 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813856 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313851628 +0000 UTC m=+114.828309363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813870 5121 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813890 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert podName:0d3e4d34-c74d-4572-aca8-da4c6c85fa79 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313884188 +0000 UTC m=+114.828341923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert") pod "openshift-kube-scheduler-operator-54f497555d-km69x" (UID: "0d3e4d34-c74d-4572-aca8-da4c6c85fa79") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813905 5121 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813933 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313920959 +0000 UTC m=+114.828378694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813947 5121 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813966 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.31396142 +0000 UTC m=+114.828419155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813979 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814003 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert podName:a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313997001 +0000 UTC m=+114.828454726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert") pod "catalog-operator-75ff9f647d-wwrwg" (UID: "a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814489 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814528 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert podName:21a8987a-ee46-4b59-b949-55032c182585 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.314521235 +0000 UTC m=+114.828978970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert") pod "olm-operator-5cdf44d969-htdrd" (UID: "21a8987a-ee46-4b59-b949-55032c182585") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.815931 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.838326 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.856554 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860042 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerStarted","Data":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860096 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerStarted","Data":"b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860925 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863087 5121 generic.go:358] "Generic (PLEG): container finished" podID="e173473c-5d44-44cf-833c-2a88d061dd9f" containerID="ad8a18db20601067d45dfc8d825312457a4a229c9120ff9c3d1fce49c153e941" exitCode=0
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerDied","Data":"ad8a18db20601067d45dfc8d825312457a4a229c9120ff9c3d1fce49c153e941"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863177 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-x8c88 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863203 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerStarted","Data":"479a641ee1fca10577a89b42bb5fc7f8cffb119bef0ce8c1fb130d452b8c6f86"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863258 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"7eb10c7f12c4c12fc6b146ae07090a3f53a77866aae9b926e9eff03e5f015a83"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866461 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"997b09ce15268eb52254feb45e352b0775a5b304238dd427e95511fadc30437f"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866477 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.869022 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" event={"ID":"62cfebd6-02c7-4437-9be3-60aec3d91f1b","Type":"ContainerStarted","Data":"e5102d319ba62a16ecff518a9d80d5153a5bbf1e8072c11bbdae5c28c3135a87"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.869076 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" event={"ID":"62cfebd6-02c7-4437-9be3-60aec3d91f1b","Type":"ContainerStarted","Data":"60a6eb0ef06edadd6ed1792649a499a5f18a56a5b6dba711cb4774df3958f0c6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.870608 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerStarted","Data":"c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.870643 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerStarted","Data":"3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.872055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" event={"ID":"18ee9403-18d0-4528-a3cd-82ea0dba3576","Type":"ContainerStarted","Data":"6ae9c66a898663b3f60f7204d34e9d3068688099ce5cad9cfd3c1cdacf23426e"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.872112 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" event={"ID":"18ee9403-18d0-4528-a3cd-82ea0dba3576","Type":"ContainerStarted","Data":"9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.876727 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.877368 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881611 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerStarted","Data":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881700 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerStarted","Data":"7bc05f9957f09f27cee7504d54470ecd9c12fb4c5e2801caea1078ac4942d85e"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881729 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.883899 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-m7q6l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.883992 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.885925 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerStarted","Data":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.886322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerStarted","Data":"8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.886351 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.887461 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-w48qb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.887501 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.896788 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.921990 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.937135 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.957467 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.976718 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.996942 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.016435 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.037252 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.057026 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.077094 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.097677 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.117442 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.138333 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.157752 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.187152 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.198088 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.217045 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.236133 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.256985 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.276752 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.298837 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.323464 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.338185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349146 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349288 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349451 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349499 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349731 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349796 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349931 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350063 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350125 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350156 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350212 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350238 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350265 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350311 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350384 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.351200 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.351752 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.352560 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.352940 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.353440 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.355416 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") "
pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356276 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356771 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.357839 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.358482 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.360775 5121 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.376418 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.388744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.401314 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.416468 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.437619 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.447632 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.457336 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.464288 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.477443 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.484935 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.501440 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.516470 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.516777 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 18 00:10:31 crc 
kubenswrapper[5121]: I0218 00:10:31.537335 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.554516 5121 request.go:752] "Waited before sending request" delay="1.982224412s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.557898 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.576839 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.597031 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.617111 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.637257 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.657510 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.661565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.677586 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.683605 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.697342 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.701331 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.716685 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.736882 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.760048 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.765502 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.776674 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.855456 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.874743 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.892064 5121 generic.go:358] "Generic (PLEG): container finished" podID="4fa50e1e-3367-4e1b-93fb-aea8f3220c81" containerID="d046876f99826d3366167a71fbd936d5cf246d1cf90adccb1d510e846482c883" exitCode=0 Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.892117 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerDied","Data":"d046876f99826d3366167a71fbd936d5cf246d1cf90adccb1d510e846482c883"} Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.895192 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerStarted","Data":"0d22ca1938f30c62be398f37a97feb86853ed24566690cb1f3b80662ae71ea89"} Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.903202 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.936621 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.936938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.940407 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.953049 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.959465 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.963019 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.975413 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.001295 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.018293 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: 
\"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.029730 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.034917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.052321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8gh7\" (UniqueName: \"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.073363 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.100153 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: 
\"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.121850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.134082 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.156516 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.164911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.170361 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.170697 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.177102 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.199136 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.204875 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.224971 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.230228 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.233991 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.249589 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.273830 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.274194 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.299375 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.304017 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.304268 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.310193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.312513 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.314281 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.337312 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.345502 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.345811 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.357412 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.374857 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.389613 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.396927 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wf2\" (UniqueName: \"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.397411 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.413472 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.413747 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.416247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.421774 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.422282 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.431952 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.438404 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.446564 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.462527 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.463899 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.464614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.467993 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.472508 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.480305 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.486819 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.502367 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.518499 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.522922 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.539172 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.542165 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69df6480_3d02_4112_b8db_3507dd5a5f49.slice/crio-bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363 WatchSource:0}: Error finding container bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363: Status 404 returned error can't find the container with id bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.550834 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585738 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585808 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585826 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585870 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585901 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585918 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585941 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586025 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586050 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586068 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586103 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586121 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586181 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586195 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586258 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.589966 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.598008 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.598412 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.098397126 +0000 UTC m=+116.612854861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.608527 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acc779e_6e10_4bc7_851f_c14ba843c057.slice/crio-ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e WatchSource:0}: Error finding container ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e: Status 404 returned error can't find the container with id ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.619594 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.641348 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e287aff_1485_4233_8648_ece2622ccf37.slice/crio-b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869 WatchSource:0}: Error finding container b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869: Status 404 returned error can't find the container with id b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.664110 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.669765 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" podStartSLOduration=93.669515394 podStartE2EDuration="1m33.669515394s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:32.661579723 +0000 UTC m=+116.176037468" watchObservedRunningTime="2026-02-18 00:10:32.669515394 +0000 UTC m=+116.183973139"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689285 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689770 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689806 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.698612 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.198589539 +0000 UTC m=+116.713047274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.699520 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.699688 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700083 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700108 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700195 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.702375 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.702914 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.202898254 +0000 UTC m=+116.717355989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704035 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704094 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704147 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704314 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704436 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704460 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704925 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.705002 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.705977 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.709899 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.721177 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.721493 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.755500 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756161 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756657 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756895 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.758913 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.759980 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.766402 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.766967 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.809455 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.810347 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.31032069 +0000 UTC m=+116.824778425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.810742 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.811024 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.311017848 +0000 UTC m=+116.825475583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.819123 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.833099 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.837352 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.864260 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.880406 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.881978 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.911756 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.912232 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.41219423 +0000 UTC m=+116.926651965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.922915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"1f2a145db7b768525fc203c9903cddb7b33069c700d92fe1f80c8c26d0d02829"}
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.954021 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.976490 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" event={"ID":"8724461b-b94b-4f4a-9c9f-4a131b9e02c2","Type":"ContainerStarted","Data":"93d25e22af9ffc3ca13c60846acc7c1b4739d1c9a575fe62e5fcf5bc43f3b946"}
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.988360 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerStarted","Data":"ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.005939 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"7cf679dc87cc0f98dab0f8bb142577dba1f068fedf9d23c8c1b34c6b0f64ee77"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.007723 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.014009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.014416 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.514398757 +0000 UTC m=+117.028856492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.017705 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" event={"ID":"69df6480-3d02-4112-b8db-3507dd5a5f49","Type":"ContainerStarted","Data":"bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.021412 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" event={"ID":"9c0d1702-8700-443c-9bf2-afa4222bd41c","Type":"ContainerStarted","Data":"ee06a2eaa9d9a14b66b7bb3791daece348f4bc674bc37c8a7b0df57f24bbdbbe"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.039596 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.081964 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.116173 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.116850 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.616827961 +0000 UTC m=+117.131285706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.162180 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" podStartSLOduration=95.162156824 podStartE2EDuration="1m35.162156824s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.133112775 +0000 UTC m=+116.647570510" watchObservedRunningTime="2026-02-18 00:10:33.162156824 +0000 UTC m=+116.676614559"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.164087 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.210953 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podStartSLOduration=94.210935337 podStartE2EDuration="1m34.210935337s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.17426008 +0000 UTC m=+116.688717825" watchObservedRunningTime="2026-02-18 00:10:33.210935337 +0000 UTC m=+116.725393082"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.218926 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.219373 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.719355117 +0000 UTC m=+117.233812852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.293493 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" podStartSLOduration=94.293470182 podStartE2EDuration="1m34.293470182s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.292835735 +0000 UTC m=+116.807293470" watchObservedRunningTime="2026-02-18 00:10:33.293470182 +0000 UTC m=+116.807927927"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.320205 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.323154 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.823129586 +0000 UTC m=+117.337587321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.422132 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.423769 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.923753533 +0000 UTC m=+117.438211338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.526019 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.526349 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.0263242 +0000 UTC m=+117.540781925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: W0218 00:10:33.527734 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4dec16_09b2_4707_a2f6_f502d32b4fb8.slice/crio-ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344 WatchSource:0}: Error finding container ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344: Status 404 returned error can't find the container with id ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.630054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.630429 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.130415417 +0000 UTC m=+117.644873152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.651174 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podStartSLOduration=95.651154568 podStartE2EDuration="1m35.651154568s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.615734024 +0000 UTC m=+117.130191779" watchObservedRunningTime="2026-02-18 00:10:33.651154568 +0000 UTC m=+117.165612303"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.730975 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.731413 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.231396823 +0000 UTC m=+117.745854558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.812851 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podStartSLOduration=95.812829639 podStartE2EDuration="1m35.812829639s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.811940395 +0000 UTC m=+117.326398140" watchObservedRunningTime="2026-02-18 00:10:33.812829639 +0000 UTC m=+117.327287374"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.839427 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.840106 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.34009245 +0000 UTC m=+117.854550185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.854609 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.888158 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.895638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.917436 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.941594 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.941961 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.441943269 +0000 UTC m=+117.956401004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.022913 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbdd0c4c_8844_44cd_885a_c2b40db8dcb4.slice/crio-3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509 WatchSource:0}: Error finding container 3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509: Status 404 returned error can't find the container with id 3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.045257 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.045710 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.545694237 +0000 UTC m=+118.060151972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.047984 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbdb0e57_487f_44df_bfea_01e173ebb1e3.slice/crio-e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1 WatchSource:0}: Error finding container e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1: Status 404 returned error can't find the container with id e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.058695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerStarted","Data":"a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.059577 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29522880-hmpf4" podStartSLOduration=96.059564929 podStartE2EDuration="1m36.059564929s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.010927429 +0000 UTC m=+117.525385174" watchObservedRunningTime="2026-02-18 00:10:34.059564929 +0000 UTC m=+117.574022674"
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.064724 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"f510b314c37ea5aa5b6f533d4de607061ea409f8191aa27663cd155956f43fdd"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.064777 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"16ab1002607b82e28ce64fa66aeb70d32c5d21971292cd555f66e076a7ee878e"}
Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.073940 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0720e131_2f16_4741_bef5_fa81e51085a8.slice/crio-be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5 WatchSource:0}: Error finding container be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5: Status 404 returned error can't find the container with id be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.077118 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerStarted","Data":"ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.083722 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vn45p" event={"ID":"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1","Type":"ContainerStarted","Data":"e8758536e80a5ee0cdc87084ec4fe6e239c7c2b24774b2a2c3c8af7ce73e9cd6"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.083805 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vn45p" event={"ID":"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1","Type":"ContainerStarted","Data":"c6638e39589d920dc2d02de998bd9ec9dd558f4591b874cfddb785f7e59686b5"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.093411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"524fda9e36c5660caad1709d2992481c8479ec11e86aef492234d14b781402b4"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.105588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" event={"ID":"9c0d1702-8700-443c-9bf2-afa4222bd41c","Type":"ContainerStarted","Data":"33d1d8377d58687970fadd809be92b41d6b6cb2794a75052ad252321b9d10942"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.135695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerStarted","Data":"ce83ab25e1e8e9f955af7b1409e400ceb125028d31573c59d7119d8ace62ac10"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.148861 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.152072 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.652051513 +0000 UTC m=+118.166509248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.192181 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7b8sg" event={"ID":"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4","Type":"ContainerStarted","Data":"3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.198441 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" event={"ID":"8724461b-b94b-4f4a-9c9f-4a131b9e02c2","Type":"ContainerStarted","Data":"e39325a0310e88546bb1492440a0e4d5c5e0531d5beedc2393f5dc9e390153bb"}
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.235836 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.237893 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.248083 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.252282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.252777 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.752762862 +0000 UTC m=+118.267220587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.253976 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rsbpp"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.277715 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"
Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.278292 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.285762 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-mkw5h"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.305950 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"]
Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.312566 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3804ba_f1a0_4e30_9bfb_a6ebc39f7cd1.slice/crio-5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96 WatchSource:0}: Error finding container 5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96: Status 404 returned error can't find the container with id 5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.316610 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.320283 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"]
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.320473 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" podStartSLOduration=96.320442888 podStartE2EDuration="1m36.320442888s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.318010546 +0000 UTC m=+117.832468281" watchObservedRunningTime="2026-02-18 00:10:34.320442888 +0000 UTC m=+117.834900643"
Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.347129 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46e61bd_a38a_4792_98ee_067e427538c9.slice/crio-40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827 WatchSource:0}: Error finding container 40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827: Status 404 returned error can't find the container with id 40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827
Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.353048 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.354716 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.854693263 +0000 UTC m=+118.369150998 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.436111 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.454563 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.455513 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.456415 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.956387947 +0000 UTC m=+118.470845692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.489099 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.489746 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.495409 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.498356 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.501462 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.518189 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.522955 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"] Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.526768 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b1e56fa_e38b_48bc_9768_0bc82aca0a0c.slice/crio-6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c WatchSource:0}: Error finding container 6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c: Status 404 returned error can't find the container with id 6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.528870 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.530590 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.559321 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.559690 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.059673873 +0000 UTC m=+118.574131608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.578388 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.614170 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podStartSLOduration=96.614151125 podStartE2EDuration="1m36.614151125s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.592691255 +0000 UTC m=+118.107149190" watchObservedRunningTime="2026-02-18 00:10:34.614151125 +0000 UTC m=+118.128608850" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.615447 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" podStartSLOduration=96.615441259 podStartE2EDuration="1m36.615441259s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.613817937 +0000 UTC m=+118.128275682" watchObservedRunningTime="2026-02-18 00:10:34.615441259 +0000 UTC m=+118.129898994" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.661151 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.664336 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.164317105 +0000 UTC m=+118.678775020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.685559 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" podStartSLOduration=96.685529118 podStartE2EDuration="1m36.685529118s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.680944619 +0000 UTC m=+118.195402354" watchObservedRunningTime="2026-02-18 00:10:34.685529118 +0000 UTC m=+118.199986853" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.701631 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vn45p" podStartSLOduration=5.701612749 podStartE2EDuration="5.701612749s" 
podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.700881629 +0000 UTC m=+118.215339364" watchObservedRunningTime="2026-02-18 00:10:34.701612749 +0000 UTC m=+118.216070484" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.765899 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.766419 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.266218945 +0000 UTC m=+118.780676700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.766856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.769005 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.268993838 +0000 UTC m=+118.783451573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.869574 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.870095 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.370048905 +0000 UTC m=+118.884506640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.870933 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.871364 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.371356589 +0000 UTC m=+118.885814324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.969269 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.969349 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.973973 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.974292 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.474265765 +0000 UTC m=+118.988723500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.976452 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.978373 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.47829592 +0000 UTC m=+118.992753655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.981850 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.072060 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.081692 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.082233 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.582207563 +0000 UTC m=+119.096665298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.167022 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.173100 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:35 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:35 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:35 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.173153 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.185518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.187581 5121 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.687564564 +0000 UTC m=+119.202022299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.289058 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.289472 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.789417451 +0000 UTC m=+119.303875196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.295791 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.296306 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.796280771 +0000 UTC m=+119.310738506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.334707 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e4dec16-09b2-4707-a2f6-f502d32b4fb8" containerID="98443954ec3593da1274d3bcef771583dcff25c4f81e7b3cbcc6a0883e483dea" exitCode=0 Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.334846 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerDied","Data":"98443954ec3593da1274d3bcef771583dcff25c4f81e7b3cbcc6a0883e483dea"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.379175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" event={"ID":"7c318bc6-d06b-45e4-a256-a74767b40a60","Type":"ContainerStarted","Data":"ab1d925c968d47f2408727cc7fc2a524c16033995c8ea19954ecfa00650c8979"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.383532 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"7d83d07068fffeffa307275920be1250501974d71f35f484089eb2aafc67f81e"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.385307 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h64q4" 
event={"ID":"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac","Type":"ContainerStarted","Data":"fe6095032203997db84a66b4ac6bc9a60918837453b8552b5b8c37d8f2860fcb"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.393915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" event={"ID":"0d3e4d34-c74d-4572-aca8-da4c6c85fa79","Type":"ContainerStarted","Data":"5cfaa198d1c53ac88755f51dae35e88917dba5760df800689c0f2305b60bd633"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.398002 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.399987 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.899949677 +0000 UTC m=+119.414407412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.418617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" event={"ID":"33af1cb9-6bf3-4a05-8884-c2e1ae482ada","Type":"ContainerStarted","Data":"2bee31f82bdc0f2cc43d8731caa55b7da3880fa6605ae9956d41530ef635988c"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.418994 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" event={"ID":"33af1cb9-6bf3-4a05-8884-c2e1ae482ada","Type":"ContainerStarted","Data":"93d2a97bea82695deb6119a63daa54916d6a4b6bdd3fc7907dbd8e150f22ac5f"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.436538 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"1339772cc48b1a0b2dbd5f15337cea2c465de890e8e947ce732ca120df093ee6"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.442915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" event={"ID":"bbdb0e57-487f-44df-bfea-01e173ebb1e3","Type":"ContainerStarted","Data":"ac2750573ce29b122cf6b672117a9a63563998bc48c114f0b6d8de00c608c37a"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.442978 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-67c89758df-qmtl4" event={"ID":"bbdb0e57-487f-44df-bfea-01e173ebb1e3","Type":"ContainerStarted","Data":"e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.444333 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.449823 5121 patch_prober.go:28] interesting pod/console-operator-67c89758df-qmtl4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.449902 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" podUID="bbdb0e57-487f-44df-bfea-01e173ebb1e3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.451544 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" event={"ID":"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe","Type":"ContainerStarted","Data":"6cd544e195b0649cf1787498d849054212c475bf78760c818b8ede8cfdd0393a"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.482922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"fb3f0db1e232b21db1b3649c629c4cd2168577f8008392e223f109f67dcb7d1b"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.483277 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"6e98c1095e7d2a31fbb41a2881ca420c8be83284faf532444b2b37fd881d93d9"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.504911 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" podStartSLOduration=97.504870906 podStartE2EDuration="1m37.504870906s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.497192536 +0000 UTC m=+119.011650271" watchObservedRunningTime="2026-02-18 00:10:35.504870906 +0000 UTC m=+119.019328641" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.507971 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" podStartSLOduration=97.507948426 podStartE2EDuration="1m37.507948426s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.451901973 +0000 UTC m=+118.966359728" watchObservedRunningTime="2026-02-18 00:10:35.507948426 +0000 UTC m=+119.022406161" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.519591 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.521935 5121 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.02192069 +0000 UTC m=+119.536378425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.532149 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" podStartSLOduration=97.532130278 podStartE2EDuration="1m37.532130278s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.527134417 +0000 UTC m=+119.041592152" watchObservedRunningTime="2026-02-18 00:10:35.532130278 +0000 UTC m=+119.046588003" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.591289 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerStarted","Data":"1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.605249 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.620500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.620803 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.120753771 +0000 UTC m=+119.635211506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.621267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.621833 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.121820879 +0000 UTC m=+119.636278614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.629810 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.643836 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podStartSLOduration=6.643812233 podStartE2EDuration="6.643812233s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.630396292 +0000 UTC m=+119.144854027" watchObservedRunningTime="2026-02-18 00:10:35.643812233 +0000 UTC m=+119.158269968" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.723533 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.724879 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.224839487 +0000 UTC m=+119.739297222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.730455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"88cc256e548c340abf25e905fa09540c865609cc35ca0674f05bd495618cfee7"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.740770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"3bfd7d2df98a7f8f131b6a896227f3b6080d444d029c8e32b90399f1813c0a26"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.740834 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"b5adce8767a51e5bc604cfd80d2b7076394db921157752a65b785676c8d8f897"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.753331 5121 generic.go:358] "Generic (PLEG): container finished" podID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerID="a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63" exitCode=0 Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.753582 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerDied","Data":"a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.759502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" event={"ID":"efe976a0-6ea6-4283-8b7c-97caa4f2111b","Type":"ContainerStarted","Data":"06924237f5a12bb896de56c20ffcf59ee476f318eac0ef1cb36097a01118f830"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.770079 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"b221f77b42431999856b0c1f1b2be6e67c1ba4b2b16da33f41cd28cd6a34fe03"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.787892 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.807030 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"6aa6d6abd9029292beaa999313391063e7dfc8560a9303041ad206ea141a32a3"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.819021 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" podStartSLOduration=97.818958835 podStartE2EDuration="1m37.818958835s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.797938886 +0000 UTC m=+119.312396631" 
watchObservedRunningTime="2026-02-18 00:10:35.818958835 +0000 UTC m=+119.333416570" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.823146 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"25c57dfacc7f0b438706aa06d3aa285e6b50e71a419c939596c433e834b52465"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.825457 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.827015 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.326976483 +0000 UTC m=+119.841434218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.848510 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" podStartSLOduration=97.848487926 podStartE2EDuration="1m37.848487926s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.846189055 +0000 UTC m=+119.360646790" watchObservedRunningTime="2026-02-18 00:10:35.848487926 +0000 UTC m=+119.362945671" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.852015 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.852078 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"a35c1a8554f97c336c169b9b7ab07394eb161632ed304015d160d6c0a71bba70"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.853317 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 
00:10:35.859191 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"b024dc4376a64f651df0bc3a112fbec788ffb545ed121d64a800b2fd5c634f79"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863179 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" event={"ID":"49d45bda-ec47-407b-b527-c7267c3825c0","Type":"ContainerStarted","Data":"51961dc119cadddbf2dc9028d04b910997bdf07db6c6c353cd0c340132d26dc4"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863771 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-78c6t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863828 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.867696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" event={"ID":"0720e131-2f16-4741-bef5-fa81e51085a8","Type":"ContainerStarted","Data":"c66d8c86b7cf91f2340bd4fcd17f8865efc3566d23d8bf8e1c6a3a7a22463807"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.867753 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" 
event={"ID":"0720e131-2f16-4741-bef5-fa81e51085a8","Type":"ContainerStarted","Data":"be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.884621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" event={"ID":"69df6480-3d02-4112-b8db-3507dd5a5f49","Type":"ContainerStarted","Data":"c08178fc805e104d9dd7741e337c5cef0fa68324bb63e67b395a6b82d2a6f76d"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.887600 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"7b11752f9095d2aca0ab2ad88d0eac8a8324d281d1ce6c2a4e0f148ea173c786"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.887644 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.889722 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.889770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"b0ff9640da837eaf58669b3a6f94ba55e0318bc5c67cb59a44276b751785d59e"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.890529 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.894434 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" event={"ID":"aa83ca9d-be38-4710-ace7-571b9e8b43dc","Type":"ContainerStarted","Data":"d2dd15319c3c0d7b810774c2cd6f8ebc724de4699bc217763b9dc02b5c38099d"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.894468 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" event={"ID":"aa83ca9d-be38-4710-ace7-571b9e8b43dc","Type":"ContainerStarted","Data":"a501b55e6d2433284c8276cd7d104a50c452023038d145f65579b91148df8ee3"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.904164 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.904253 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.905180 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7b8sg" event={"ID":"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4","Type":"ContainerStarted","Data":"45d8169f3fe2de7c4be4ab685de632eb5e0feb714a533aeadf85ed267cb47308"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.909510 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"f42ca4ac8b9a4b25bc2fb0f29333360a49f661f685caa1d9b319924894d017a6"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.909569 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.912427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-vlht9" event={"ID":"db5b1911-47a0-41f1-b793-924df4056e20","Type":"ContainerStarted","Data":"c7f55367ee840399ddf2795f7b6fc4b4849c5a6a6e4fa3704d578be904566f40"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.914979 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" event={"ID":"1c0a3ab2-4ddb-4472-af47-3471a18714be","Type":"ContainerStarted","Data":"d69dc2c05a7778796657145db3a981f3688e5f3673d5df17aee60bfd65526682"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.915746 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.926901 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.928478 5121 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" podStartSLOduration=96.928453463 podStartE2EDuration="1m36.928453463s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.900131124 +0000 UTC m=+119.414588879" watchObservedRunningTime="2026-02-18 00:10:35.928453463 +0000 UTC m=+119.442911218" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.928585 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.428553015 +0000 UTC m=+119.943010750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.928621 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podStartSLOduration=96.928613427 podStartE2EDuration="1m36.928613427s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.927712233 +0000 UTC m=+119.442169978" watchObservedRunningTime="2026-02-18 00:10:35.928613427 +0000 UTC m=+119.443071162" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 
00:10:35.928943 5121 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jp5zf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": dial tcp 10.217.0.16:5443: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.929010 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podUID="1c0a3ab2-4ddb-4472-af47-3471a18714be" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": dial tcp 10.217.0.16:5443: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.932062 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" event={"ID":"21a8987a-ee46-4b59-b949-55032c182585","Type":"ContainerStarted","Data":"17b8ab3a0733a0bf822d019e6838ae09124d1851a1cb5d677a5de4f0211060c0"} Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.030854 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.030942 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc 
kubenswrapper[5121]: I0218 00:10:36.030981 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.031057 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.031210 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.067216 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.567195394 +0000 UTC m=+120.081653129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.067230 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" podStartSLOduration=98.067210895 podStartE2EDuration="1m38.067210895s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.061409703 +0000 UTC m=+119.575867438" watchObservedRunningTime="2026-02-18 00:10:36.067210895 +0000 UTC m=+119.581668620" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.073210 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.080754 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" podStartSLOduration=98.080730088 podStartE2EDuration="1m38.080730088s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.988086359 +0000 UTC 
m=+119.502544094" watchObservedRunningTime="2026-02-18 00:10:36.080730088 +0000 UTC m=+119.595187853" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.091685 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-mkw5h" podStartSLOduration=98.091641022 podStartE2EDuration="1m38.091641022s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.080616924 +0000 UTC m=+119.595074659" watchObservedRunningTime="2026-02-18 00:10:36.091641022 +0000 UTC m=+119.606098757" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.092627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.095524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.096107 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 
00:10:36.149006 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" podStartSLOduration=97.148980239 podStartE2EDuration="1m37.148980239s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.123383481 +0000 UTC m=+119.637841216" watchObservedRunningTime="2026-02-18 00:10:36.148980239 +0000 UTC m=+119.663437984" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.150403 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podStartSLOduration=97.150395006 podStartE2EDuration="1m37.150395006s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.149563895 +0000 UTC m=+119.664021630" watchObservedRunningTime="2026-02-18 00:10:36.150395006 +0000 UTC m=+119.664852751" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.178497 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.178901 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc 
kubenswrapper[5121]: E0218 00:10:36.180529 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.680508822 +0000 UTC m=+120.194966557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.186687 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.195431 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:36 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:36 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:36 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.195561 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.206975 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7b8sg" podStartSLOduration=98.206953782 podStartE2EDuration="1m38.206953782s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.180262705 +0000 UTC m=+119.694720440" watchObservedRunningTime="2026-02-18 00:10:36.206953782 +0000 UTC m=+119.721411517" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.273355 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.280814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.280995 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.281226 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.781207971 +0000 UTC m=+120.295665706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.302722 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.315858 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.383998 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.384328 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.884304221 +0000 UTC m=+120.398761956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.488773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.489433 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.989398535 +0000 UTC m=+120.503856270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.489662 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" podStartSLOduration=98.489617821 podStartE2EDuration="1m38.489617821s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.206828909 +0000 UTC m=+119.721286644" watchObservedRunningTime="2026-02-18 00:10:36.489617821 +0000 UTC m=+120.004075586" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.490912 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"] Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.594252 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.594718 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:37.094694284 +0000 UTC m=+120.609152019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.695841 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.697257 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.19723509 +0000 UTC m=+120.711692825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.801328 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.801946 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.301920093 +0000 UTC m=+120.816377828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: W0218 00:10:36.847832 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac WatchSource:0}: Error finding container d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac: Status 404 returned error can't find the container with id d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.910297 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.911791 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.41177153 +0000 UTC m=+120.926229265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.917047 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlvtl"] Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.976374 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerStarted","Data":"ddb96af64af9a1b96dc503affdf84bb0f278e8b4d80c1d62e954412f25b0acb9"} Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.977596 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.008329 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" podStartSLOduration=99.00829679 podStartE2EDuration="1m39.00829679s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.003778002 +0000 UTC m=+120.518235747" watchObservedRunningTime="2026-02-18 00:10:37.00829679 +0000 UTC m=+120.522754525" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.013579 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" 
event={"ID":"7c318bc6-d06b-45e4-a256-a74767b40a60","Type":"ContainerStarted","Data":"015e8f11d2a9a79b540e221801bbe33065f9504fa551ab77e9a2334adfc58dbe"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.017174 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.018065 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.518047464 +0000 UTC m=+121.032505199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.024456 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"710e9623a093a8507c8d2d24970de5d7231d0839b719a7e44171780f1f1d07fa"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.071232 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h64q4" 
event={"ID":"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac","Type":"ContainerStarted","Data":"ae75c7dab40e2507c764d37a0a076d3421de6c93481d95347f5050699809a855"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.081740 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" event={"ID":"0d3e4d34-c74d-4572-aca8-da4c6c85fa79","Type":"ContainerStarted","Data":"4842ad275445dc936d507a93e417969263d66f2e2f4b36fcd33f63046f26aacd"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.087427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" event={"ID":"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe","Type":"ContainerStarted","Data":"387d0d0e4dd13a423b159b27672d061a0fad21db163e790a174dd6baf0cf05ac"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.092021 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.096088 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"97239879edff5b2ac7dd189ded9fbf06e0ae356f38969dee512df23c1be4d1c3"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.099765 5121 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wwrwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.099817 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" 
podUID="a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.103916 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"d399d8db6c76d9d41b85c00f9ffcea1e2fcea16b9a2ac7a70a5139a086d0ec9d"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.107343 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"ba9a9b208a0efb7f38aa86e6f9a71546ec7e591cdc48b4b85faa24567db1bdd6"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.107368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"396fa4298ee473a268791b8a8ed6dbe4ed0b30abb84ff829b3e6d36594a3e1d0"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.109616 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" podStartSLOduration=98.109607854 podStartE2EDuration="1m38.109607854s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.108207757 +0000 UTC m=+120.622665492" watchObservedRunningTime="2026-02-18 00:10:37.109607854 +0000 UTC m=+120.624065599" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.109966 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" podStartSLOduration=98.109959924 podStartE2EDuration="1m38.109959924s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.037951584 +0000 UTC m=+120.552409319" watchObservedRunningTime="2026-02-18 00:10:37.109959924 +0000 UTC m=+120.624417659" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.118318 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.122311 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.622291045 +0000 UTC m=+121.136748780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.132101 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" event={"ID":"efe976a0-6ea6-4283-8b7c-97caa4f2111b","Type":"ContainerStarted","Data":"6ef31106189de0016484b234bfa9963eba9e0e03bcec3315c0b284f6645a1155"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.138455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"21deed03f27e9b52140c8fb82565a47e4f166478daaff6384a3266fab96d902a"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.138484 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"1001fe23871e1464047c246f8246e9e7404b6b2c8a9b6e9766fac02ed97cd93b"} Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.173394 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" podStartSLOduration=98.173377549 podStartE2EDuration="1m38.173377549s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.15272455 +0000 UTC 
m=+120.667182295" watchObservedRunningTime="2026-02-18 00:10:37.173377549 +0000 UTC m=+120.687835284" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.174608 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" podStartSLOduration=98.174600821 podStartE2EDuration="1m38.174600821s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.172210238 +0000 UTC m=+120.686667973" watchObservedRunningTime="2026-02-18 00:10:37.174600821 +0000 UTC m=+120.689058556" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.184989 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:37 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:37 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:37 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.185094 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.224121 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.225358 
5121 ???:1] "http: TLS handshake error from 192.168.126.11:46840: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.226343 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.726312371 +0000 UTC m=+121.240770276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.226370 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" event={"ID":"49d45bda-ec47-407b-b527-c7267c3825c0","Type":"ContainerStarted","Data":"1cd5861ec018e05961d939c2b91ff79cb20531e3dce160dc5b69a5fbb2d5f91e"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.245822 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" podStartSLOduration=98.24580045 podStartE2EDuration="1m38.24580045s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.206967776 +0000 UTC m=+120.721425531" watchObservedRunningTime="2026-02-18 00:10:37.24580045 +0000 UTC m=+120.760258195"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.246305 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h64q4" podStartSLOduration=8.246297162 podStartE2EDuration="8.246297162s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.244341032 +0000 UTC m=+120.758798767" watchObservedRunningTime="2026-02-18 00:10:37.246297162 +0000 UTC m=+120.760754917"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.283976 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" podStartSLOduration=98.283953865 podStartE2EDuration="1m38.283953865s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.283277217 +0000 UTC m=+120.797734962" watchObservedRunningTime="2026-02-18 00:10:37.283953865 +0000 UTC m=+120.798411620"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.346181 5121 ???:1] "http: TLS handshake error from 192.168.126.11:46852: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.351702 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.354764 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.854741343 +0000 UTC m=+121.369199078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.357215 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"4beef904caca877837f6e2e7a5ac7471338ece8101a91c76e505e707a7d33289"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.422055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"e172a27cddb6ef22e49a42e49a7430ef15bbede061251331fb2bbcb6ab30630e"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.459997 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.461725 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.961707385 +0000 UTC m=+121.476165120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.500410 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.557801 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" podStartSLOduration=98.557781073 podStartE2EDuration="1m38.557781073s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.411098934 +0000 UTC m=+120.925556669" watchObservedRunningTime="2026-02-18 00:10:37.557781073 +0000 UTC m=+121.072238808"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.565128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.566144 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.066132341 +0000 UTC m=+121.580590076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.569990 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-vlht9" event={"ID":"db5b1911-47a0-41f1-b793-924df4056e20","Type":"ContainerStarted","Data":"ee075cfb8671768d603c3a02c902d05e21c9cb405d42244743e89f42d92d1e4a"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.580164 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49336: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.632303 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" event={"ID":"1c0a3ab2-4ddb-4472-af47-3471a18714be","Type":"ContainerStarted","Data":"8e43e078e00e9c1d1b2445ae3d01ba0dfa4f6d80e11a2bf4b3d54b230b7fbac6"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.666857 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.668173 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.168154764 +0000 UTC m=+121.682612499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.691339 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49348: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.692239 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" event={"ID":"21a8987a-ee46-4b59-b949-55032c182585","Type":"ContainerStarted","Data":"8cbc1c7c92dd9496c2b47a9860df67a282b9650526235404fe6a9388039430b8"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.693379 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.710804 5121 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-htdrd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.710909 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" podUID="21a8987a-ee46-4b59-b949-55032c182585" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.720377 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.727734 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" podStartSLOduration=99.727715839 podStartE2EDuration="1m39.727715839s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.726297292 +0000 UTC m=+121.240755027" watchObservedRunningTime="2026-02-18 00:10:37.727715839 +0000 UTC m=+121.242173574"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.729758 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-78c6t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.729816 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.731845 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.731932 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.769042 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.771634 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.271619605 +0000 UTC m=+121.786077340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.830388 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49358: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.869936 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.871973 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.371955944 +0000 UTC m=+121.886413679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.932581 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49366: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.974660 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.975055 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.475040694 +0000 UTC m=+121.989498429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.023785 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-vlht9" podStartSLOduration=99.023758016 podStartE2EDuration="1m39.023758016s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.02313758 +0000 UTC m=+121.537595315" watchObservedRunningTime="2026-02-18 00:10:38.023758016 +0000 UTC m=+121.538215751"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.037183 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49370: no serving certificate available for the kubelet"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.060558 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" podStartSLOduration=99.060533247 podStartE2EDuration="1m39.060533247s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.057291522 +0000 UTC m=+121.571749257" watchObservedRunningTime="2026-02-18 00:10:38.060533247 +0000 UTC m=+121.574990992"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.076350 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.076628 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.576612546 +0000 UTC m=+122.091070281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.177856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.178404 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.678381473 +0000 UTC m=+122.192839208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.180420 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:10:38 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Feb 18 00:10:38 crc kubenswrapper[5121]: [+]process-running ok
Feb 18 00:10:38 crc kubenswrapper[5121]: healthz check failed
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.180502 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.192806 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49374: no serving certificate available for the kubelet"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.280907 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.281149 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.281254 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.781224097 +0000 UTC m=+122.295681842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.308743 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" podStartSLOduration=99.308716155 podStartE2EDuration="1m39.308716155s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.118473059 +0000 UTC m=+121.632930804" watchObservedRunningTime="2026-02-18 00:10:38.308716155 +0000 UTC m=+121.823173890"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.383613 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.383738 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384099 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384777 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume" (OuterVolumeSpecName: "config-volume") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.385069 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.385127 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.885111868 +0000 UTC m=+122.399569603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.403211 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9" (OuterVolumeSpecName: "kube-api-access-9xzk9") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "kube-api-access-9xzk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.417898 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487445 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487910 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487925 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.488023 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.988000385 +0000 UTC m=+122.502458110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.503118 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.589533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.590017 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.089998387 +0000 UTC m=+122.604456122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.633152 5121 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jp5zf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": context deadline exceeded" start-of-body=
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.633235 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podUID="1c0a3ab2-4ddb-4472-af47-3471a18714be" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": context deadline exceeded"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.690801 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.691118 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.191075706 +0000 UTC m=+122.705533441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.691473 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.691922 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.191900206 +0000 UTC m=+122.706358101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4006df6e7fe1c2990ba899f6e2ee5473fe685dffbbdbde3cdb20d2bdc6284361"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749471 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4c3413ea01dd4a6caa2300ed225d35ec63e10da7da2d72a3c8c15034cff4d9a0"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749918 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.753001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"fc3bacf49d92746313d1f8cbebd9a26dab5972835b1ea8f54d4f6d893586b1da"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.753055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"bd49ad1e7370857b20e97cd3391712ddc59b000d62d54f58d2dc854e3300790b"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.753077 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"c3811a05fdbae324fa81ffcc6bd170ffa16b14a78aa2366d180b0bcf7b0afb23"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.761621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"1b672ddde30963505315a85cf20add041f74e112fb4cb73b91bfaf63f601b3d4"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.773213 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"8b6b2ea5802f3eadd6eb8c3bfdb4d8e6f668bfee80c791e394d00f0da842cd27"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.775855 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"275a09ca35519ecfaf83ff68694973c0ea702d66df8ce2771a2d7345fe4c99e8"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.776235 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780167 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780151 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerDied","Data":"ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780307 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.790613 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e75c9c7b1ce69c598f266da7896fbce34f8efad43cc5c7d70a6aec71cd142532"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.790728 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f33322cd45cb8003bee7c99557ce59ac78866179f84aa6084b18dc68d7cc7b19"}
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.791614 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" gracePeriod=30
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792227 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": 
dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792276 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792334 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.792519 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.292503673 +0000 UTC m=+122.806961408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.799672 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.803006 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.804043 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.811215 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" podStartSLOduration=99.811184071 podStartE2EDuration="1m39.811184071s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.799966298 +0000 UTC m=+122.314424033" watchObservedRunningTime="2026-02-18 00:10:38.811184071 +0000 UTC m=+122.325641806" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.820996 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.829372 5121 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/network-metrics-daemon-mlvtl" podStartSLOduration=100.829351905 podStartE2EDuration="1m40.829351905s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.827146937 +0000 UTC m=+122.341604672" watchObservedRunningTime="2026-02-18 00:10:38.829351905 +0000 UTC m=+122.343809640" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.895209 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.901833 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.401809756 +0000 UTC m=+122.916267491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.908045 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rsbpp" podStartSLOduration=9.908021759 podStartE2EDuration="9.908021759s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.868346903 +0000 UTC m=+122.382804638" watchObservedRunningTime="2026-02-18 00:10:38.908021759 +0000 UTC m=+122.422479504" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.920417 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49388: no serving certificate available for the kubelet" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.998330 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.998872 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.49885408 +0000 UTC m=+123.013311815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.103121 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.103654 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.603610424 +0000 UTC m=+123.118068159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.174902 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:39 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:39 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:39 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.175004 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.206975 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.207601 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:39.707572848 +0000 UTC m=+123.222030583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.311691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.312134 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.812107166 +0000 UTC m=+123.326564891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.412751 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.413035 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.912991149 +0000 UTC m=+123.427448884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.479987 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480630 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480666 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480767 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.500904 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.501121 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.506538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.530941 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.531393 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.0313693 +0000 UTC m=+123.545827035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.632374 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.632599 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.132551961 +0000 UTC m=+123.647009696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633069 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633253 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633379 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633436 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: 
\"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.633816 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.133807603 +0000 UTC m=+123.648265338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.667157 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.680339 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.683112 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.695755 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.734926 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.735235 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.23518697 +0000 UTC m=+123.749644705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735548 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735781 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735952 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: 
\"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.736152 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.236136864 +0000 UTC m=+123.750594589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.736875 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.737016 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.766197 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: 
\"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.803957 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837483 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837668 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837702 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837751 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: 
E0218 00:10:39.837922 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.337897571 +0000 UTC m=+123.852355306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.850873 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.855223 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.871192 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.904491 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939103 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939166 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939314 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.941002 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.941227 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.942560 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.442542023 +0000 UTC m=+123.956999958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.977448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.995549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.041948 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045094 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045132 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.045361 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.545338416 +0000 UTC m=+124.059796141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.072918 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.159845 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160551 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160658 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160746 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.161351 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.161415 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.161506 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.661485717 +0000 UTC m=+124.175943452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.170822 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:40 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:40 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:40 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.170906 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.187811 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.240028 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.263672 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.264047 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.764012553 +0000 UTC m=+124.278470288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.273969 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49400: no serving certificate available for the kubelet" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.366406 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 
00:10:40.366995 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.866970972 +0000 UTC m=+124.381428707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.468342 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.468731 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.968690376 +0000 UTC m=+124.483148111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.475979 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476055 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476079 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476219 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476837 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.485153 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.485883 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.497956 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:40 crc kubenswrapper[5121]: W0218 00:10:40.508747 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6854ad9b_1632_47d4_82bc_bdd90768bc2a.slice/crio-0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd WatchSource:0}: Error finding container 0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd: Status 404 returned error can't find the container with id 0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.570964 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 
00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571088 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571211 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.576357 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.076312575 +0000 UTC m=+124.590770310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673132 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.673215 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.173196535 +0000 UTC m=+124.687654270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673437 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.673741 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.173734179 +0000 UTC m=+124.688191914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673889 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673936 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673956 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.674340 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " 
pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.674542 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.715074 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.787040 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.787839 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.287820886 +0000 UTC m=+124.802278621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.799698 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.874211 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"b7ed7dc670ad2dcb9f8640d5f44b830e13e4f0554ae87aa8ba2653124a6f77c7"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.889187 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.889687 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.389668815 +0000 UTC m=+124.904126550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.901209 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"ba5ce9e402f3d620de01810fe1d74320f085bb14c8d9080e50266330e385fdc0"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.922734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerStarted","Data":"0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.982162 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.990955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.991292 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:41.491267048 +0000 UTC m=+125.005724783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.094018 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.094402 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.594382919 +0000 UTC m=+125.108840804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.171888 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:10:41 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Feb 18 00:10:41 crc kubenswrapper[5121]: [+]process-running ok
Feb 18 00:10:41 crc kubenswrapper[5121]: healthz check failed
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.172372 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.195219 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.195674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.695622582 +0000 UTC m=+125.210080337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.280382 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.297939 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.298436 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.798414345 +0000 UTC m=+125.312872090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.399852 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.400797 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.900764066 +0000 UTC m=+125.415221801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.458326 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.478929 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.479302 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.495683 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.502197 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.502532 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.002517283 +0000 UTC m=+125.516975018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.595145 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604145 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604287 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604314 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.604366 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.10431368 +0000 UTC m=+125.618771415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604831 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.605068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.605668 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.105626314 +0000 UTC m=+125.620084049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.607925 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.609961 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.613981 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.621713 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706283 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.706417 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.206399734 +0000 UTC m=+125.720857469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706578 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706642 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706700 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706715 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706752 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.707164 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.707504 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.207480122 +0000 UTC m=+125.721937857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.707520 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.726676 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.808218 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.808594 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.308554911 +0000 UTC m=+125.823012656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.808905 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809074 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809106 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809222 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.809459 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.309436034 +0000 UTC m=+125.823893959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.811893 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.834666 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.852929 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.869305 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.870486 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"]
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.913830 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.914159 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.414142087 +0000 UTC m=+125.928599822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.921400 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.973897 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a" exitCode=0
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.974045 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"}
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.984671 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" exitCode=0
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.984770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181"}
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994243 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54" exitCode=0
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994452 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"}
Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994495 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"68089a9179b2ee54313136fab6546d018047ab31029619dfc6933c6ec3ac176c"}
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015065 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48" exitCode=0
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015327 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"}
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerStarted","Data":"e3aa645abbf5b996b104f5c41a2f1ccc97cd615ef2eb0ff0e26a4d5ea630790e"}
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.023592 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.023870 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.024072 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.024155 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.026152 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.526130811 +0000 UTC m=+126.040588546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.041676 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.067835 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.068055 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.079609 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.079913 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125343 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125508 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125560 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125624 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125673 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125700 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.126147 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.126234 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.626211972 +0000 UTC m=+126.140669707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.126446 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.158882 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.171726 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.190184 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:10:42 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Feb 18 00:10:42 crc kubenswrapper[5121]: [+]process-running ok
Feb 18 00:10:42 crc kubenswrapper[5121]: healthz check failed
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.190254 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227839 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.228428 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.229141 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.729129069 +0000 UTC m=+126.243586804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.238882 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.263795 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.329524 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.329811 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.829784547 +0000 UTC m=+126.344242282 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.329878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.330859 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.830848724 +0000 UTC m=+126.345306459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.340746 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.340802 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.365166 5121 patch_prober.go:28] interesting pod/console-64d44f6ddf-7b8sg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.365334 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7b8sg" podUID="dbdd0c4c-8844-44cd-885a-c2b40db8dcb4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.378269 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.378357 5121 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.432074 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.432718 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.433975 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.933950286 +0000 UTC m=+126.448408021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.453499 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.483767 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787ee824_3e40_4929_9eda_a58528843d28.slice/crio-214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e WatchSource:0}: Error finding container 214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e: Status 404 returned error can't find the container with id 214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.524833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.538943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.539300 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.039285005 +0000 UTC m=+126.553742740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.543509 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod60adf0de_2267_4a37_abc8_6b97aec2d3bd.slice/crio-183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017 WatchSource:0}: Error finding container 183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017: Status 404 returned error can't find the container with id 183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017 Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.646576 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.646876 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.146858853 +0000 UTC m=+126.661316588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.720276 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.749347 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.749962 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.249937294 +0000 UTC m=+126.764395039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.753124 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0ed157_f5bd_43a5_b641_bfa4e8df62ff.slice/crio-003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34 WatchSource:0}: Error finding container 003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34: Status 404 returned error can't find the container with id 003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34 Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.770258 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.850566 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.850810 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.350792307 +0000 UTC m=+126.865250042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.871243 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.887880 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.888055 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.893176 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.893243 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49414: no serving certificate available for the kubelet" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.976772 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977019 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977158 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977230 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.977724 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.477704739 +0000 UTC m=+126.992162484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.060980 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerStarted","Data":"003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.077275 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerStarted","Data":"183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.077947 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078076 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078116 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078155 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078733 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078787 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.078882 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.5788635 +0000 UTC m=+127.093321235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084719 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a" exitCode=0 Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084867 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084946 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerStarted","Data":"214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.092002 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerStarted","Data":"27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.145359 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: 
\"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.173724 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:43 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:43 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:43 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.173789 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.179826 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.180793 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.68077917 +0000 UTC m=+127.195236905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.236367 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.250154 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.283466 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.283724 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.783705647 +0000 UTC m=+127.298163382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.314177 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.344330 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384794 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384846 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384923 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " 
pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384968 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.385262 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.885248818 +0000 UTC m=+127.399706553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486334 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486743 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod 
\"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486809 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.487458 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.987431875 +0000 UTC m=+127.501889610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.487544 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.487960 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.533085 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.588028 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " 
pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.588396 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.08838321 +0000 UTC m=+127.602840945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.661860 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.691583 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.692029 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.192003595 +0000 UTC m=+127.706461330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.797009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.797594 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.29756219 +0000 UTC m=+127.812019925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.840152 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.897977 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.898746 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.398728911 +0000 UTC m=+127.913186646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.944800 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.016387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.017075 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.51706047 +0000 UTC m=+128.031518205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.110212 5121 generic.go:358] "Generic (PLEG): container finished" podID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerID="c4d93596cf85a366d9c65b18cbb57b1a0ef35f70632d1c0e63459b646c98d329" exitCode=0 Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.110363 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerDied","Data":"c4d93596cf85a366d9c65b18cbb57b1a0ef35f70632d1c0e63459b646c98d329"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.115550 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerStarted","Data":"56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.117787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.119293 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.619264828 +0000 UTC m=+128.133722573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.127580 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"2acd9157a5c0303ad67f67ca0941df951cb9a99c9745a061c1e6e8e477768d5b"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.131909 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"d90fd19bec269295dcd896d5064cd72d8b3eeb6792e85da08c508892c9638ff0"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.138245 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643" exitCode=0 Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.138360 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.173912 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:44 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:44 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:44 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.174037 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.221306 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.222442 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.72240896 +0000 UTC m=+128.236866705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.323311 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.323702 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.823678894 +0000 UTC m=+128.338136639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.425300 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.425699 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.925682776 +0000 UTC m=+128.440140511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.527063 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.527784 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.027767381 +0000 UTC m=+128.542225116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.628714 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.631068 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.130977074 +0000 UTC m=+128.645434849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.695045 5121 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.730068 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.730494 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.230477472 +0000 UTC m=+128.744935207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.832365 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.832937 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.332915686 +0000 UTC m=+128.847373431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.933464 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.933754 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.433729497 +0000 UTC m=+128.948187242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.934075 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.934634 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.43460994 +0000 UTC m=+128.949067685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.035421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.035545 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.535523194 +0000 UTC m=+129.049980939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.035813 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.036122 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.53611251 +0000 UTC m=+129.050570245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.137416 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.137706 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.6376365 +0000 UTC m=+129.152094255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.138369 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.138971 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.638945324 +0000 UTC m=+129.153403059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.146328 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.146518 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152693 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"692bc3c6a0c9f154af5247c146d77f1e40ea74f3a51c3785fd56973022f501b1"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152744 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"31b268e598ebe0e3aa1422ac68e2eaa20287c62b44f3e830ae2d03dc9801d804"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152756 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"6afbaa90e25ddf368cb989cdc901e02c04abb2e2540a06a1dfeea5d5df7c10e9"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.155561 5121 generic.go:358] "Generic (PLEG): container finished" podID="194e426f-840b-4660-a161-f7a65ea58876" containerID="56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.155793 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerDied","Data":"56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.159359 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.159612 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.171133 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.177470 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.195459 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" podStartSLOduration=16.195423068 podStartE2EDuration="16.195423068s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:45.192003609 +0000 UTC m=+128.706461384" watchObservedRunningTime="2026-02-18 00:10:45.195423068 +0000 UTC m=+128.709880803"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.242077 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.242313 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.742268662 +0000 UTC m=+129.256726407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.243338 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.244835 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.744808848 +0000 UTC m=+129.259266763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.344269 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.344928 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.84490316 +0000 UTC m=+129.359360905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.445748 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.445677 5121 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T00:10:44.695101879Z","UUID":"4f5d7681-a594-439d-adc5-2dd55e131103","Handler":null,"Name":"","Endpoint":""}
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.446148 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.946133832 +0000 UTC m=+129.460591567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.458986 5121 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.459046 5121 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.480981 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.532860 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547181 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"194e426f-840b-4660-a161-f7a65ea58876\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547330 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "194e426f-840b-4660-a161-f7a65ea58876" (UID: "194e426f-840b-4660-a161-f7a65ea58876"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547429 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547469 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"194e426f-840b-4660-a161-f7a65ea58876\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.548473 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.553711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.556879 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "194e426f-840b-4660-a161-f7a65ea58876" (UID: "194e426f-840b-4660-a161-f7a65ea58876"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.609408 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.613131 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.622744 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.622861 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649556 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649697 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649871 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649862 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60adf0de-2267-4a37-abc8-6b97aec2d3bd" (UID: "60adf0de-2267-4a37-abc8-6b97aec2d3bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649932 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.654541 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60adf0de-2267-4a37-abc8-6b97aec2d3bd" (UID: "60adf0de-2267-4a37-abc8-6b97aec2d3bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.655564 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.655638 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.694535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.751322 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.751353 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.829959 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.838184 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.118660 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"]
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.174952 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerDied","Data":"183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017"}
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.174999 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017"
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.175041 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181058 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerDied","Data":"27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd"}
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181079 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd"
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181178 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.183001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerStarted","Data":"51cf34af5f3e60547305a8dcaaf837202c7932c821c7bc1d4c4374385f24b01a"}
Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.194607 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerStarted","Data":"3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521"}
Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.194912 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.215562 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" podStartSLOduration=109.215522909 podStartE2EDuration="1m49.215522909s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:47.213099856 +0000 UTC m=+130.727557601" watchObservedRunningTime="2026-02-18 00:10:47.215522909 +0000 UTC m=+130.729980644"
Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.280440 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.040736 5121 ???:1] "http: TLS handshake error from 192.168.126.11:35614: no serving certificate available for the kubelet"
Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.270415 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"
Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.793807 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.794253 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.808817 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.344561 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.354704 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.378625 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.378752 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:10:54 crc kubenswrapper[5121]: I0218 00:10:54.856620 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.614218 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.616883 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.621339 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.621418 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.313932 5121 ???:1] "http: TLS handshake error from 192.168.126.11:51194: no serving certificate available for the kubelet"
Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.792424 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.792568 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.378923 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.379362 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.379446 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-mkw5h"
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380437 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380533 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380993 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"} pod="openshift-console/downloads-747b44746d-mkw5h" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.381127 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" containerID="cri-o://f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7" gracePeriod=2
Feb 18 00:11:03 crc kubenswrapper[5121]: I0218 00:11:03.339230 5121 generic.go:358] "Generic (PLEG): container finished" podID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerID="f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7" exitCode=0
Feb 18 00:11:03 crc kubenswrapper[5121]: I0218 00:11:03.339338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerDied","Data":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"}
Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.611046 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.612839 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.614509 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.614557 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 18 00:11:08 crc kubenswrapper[5121]: I0218 00:11:08.237137 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:11:08 crc kubenswrapper[5121]: I0218 00:11:08.801232 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379621 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log"
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379684 5121 generic.go:358] "Generic (PLEG): container finished" podID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" exitCode=137
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379738 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerDied","Data":"1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82"}
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.787237 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log"
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.787343 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.806417 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.857681 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") "
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858735 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") "
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858822 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") "
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858676 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready" (OuterVolumeSpecName: "ready") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858898 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858935 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") "
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.860276 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861348 5121 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861388 5121 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861403 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.872600 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26" (OuterVolumeSpecName: "kube-api-access-zkf26") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "kube-api-access-zkf26". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.962636 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.386673 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"}
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.389707 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"}
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.391983 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"eb9268643d3ff2db1eac72e807eac3f882e46944a235cf46adeedecbbbce82b9"}
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.392617 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-mkw5h"
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.394276 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844" exitCode=0
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.394378 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8"
event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.395120 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.395171 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.399365 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.401496 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.404084 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.404184 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" 
event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.407537 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.415271 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418342 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418543 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerDied","Data":"ce83ab25e1e8e9f955af7b1409e400ceb125028d31573c59d7119d8ace62ac10"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418594 5121 scope.go:117] "RemoveContainer" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418798 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.427599 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.428219 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.428345 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.440535 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.440868 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.613884 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.613867974 podStartE2EDuration="1m6.613867974s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:10.611242286 +0000 UTC m=+154.125700011" watchObservedRunningTime="2026-02-18 00:11:10.613867974 +0000 UTC m=+154.128325709" 
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.682244 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"]
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.693371 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"]
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.280729 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" path="/var/lib/kubelet/pods/9b4e56ad-da89-4541-842d-17ba2d9bcb0a/volumes"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.466909 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935" exitCode=0
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.467074 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.473696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerStarted","Data":"3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.478924 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerStarted","Data":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.481901 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerStarted","Data":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.483763 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" exitCode=0
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.483845 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.486398 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717" exitCode=0
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.486533 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.494621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerStarted","Data":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.500270 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7" exitCode=0
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.500335 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7"}
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.501924 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.501973 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.513594 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4gm2" podStartSLOduration=3.844960007 podStartE2EDuration="30.513566111s" podCreationTimestamp="2026-02-18 00:10:41 +0000 UTC" firstStartedPulling="2026-02-18 00:10:43.08573329 +0000 UTC m=+126.600191025" lastFinishedPulling="2026-02-18 00:11:09.754339394 +0000 UTC m=+153.268797129" observedRunningTime="2026-02-18 00:11:11.508981782 +0000 UTC m=+155.023439557" watchObservedRunningTime="2026-02-18 00:11:11.513566111 +0000 UTC m=+155.028023916"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.548626 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttn8q" podStartSLOduration=4.770357655 podStartE2EDuration="32.548604735s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.974990265 +0000 UTC m=+125.489448000" lastFinishedPulling="2026-02-18 00:11:09.753237345 +0000 UTC m=+153.267695080" observedRunningTime="2026-02-18 00:11:11.547496076 +0000 UTC m=+155.061953821" watchObservedRunningTime="2026-02-18 00:11:11.548604735 +0000 UTC m=+155.063062480"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.573230 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fp6mh" podStartSLOduration=4.905780738 podStartE2EDuration="30.573202347s" podCreationTimestamp="2026-02-18 00:10:41 +0000 UTC" firstStartedPulling="2026-02-18 00:10:44.141455207 +0000 UTC m=+127.655912942" lastFinishedPulling="2026-02-18 00:11:09.808876806 +0000 UTC m=+153.323334551" observedRunningTime="2026-02-18 00:11:11.568907395 +0000 UTC m=+155.083365150" watchObservedRunningTime="2026-02-18 00:11:11.573202347 +0000 UTC m=+155.087660112"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.604971 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-czgg8" podStartSLOduration=3.828734429 podStartE2EDuration="31.604945214s" podCreationTimestamp="2026-02-18 00:10:40 +0000 UTC" firstStartedPulling="2026-02-18 00:10:42.01653424 +0000 UTC m=+125.530991975" lastFinishedPulling="2026-02-18 00:11:09.792745025 +0000 UTC m=+153.307202760" observedRunningTime="2026-02-18 00:11:11.602973833 +0000 UTC m=+155.117431578" watchObservedRunningTime="2026-02-18 00:11:11.604945214 +0000 UTC m=+155.119402949"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.812778 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.813035 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.239094 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.239627 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.378237 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.378344 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.509199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"}
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.511655 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"}
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.513791 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555"}
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.516754 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"}
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.531999 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rdts" podStartSLOduration=5.709944024 podStartE2EDuration="33.531977434s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.98587316 +0000 UTC m=+125.500330895" lastFinishedPulling="2026-02-18 00:11:09.80790657 +0000 UTC m=+153.322364305" observedRunningTime="2026-02-18 00:11:12.529062488 +0000 UTC m=+156.043520233" watchObservedRunningTime="2026-02-18 00:11:12.531977434 +0000 UTC m=+156.046435169"
Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.554016 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6rwlx" podStartSLOduration=4.891257335 podStartE2EDuration="29.553994548s" podCreationTimestamp="2026-02-18 00:10:43 +0000 UTC" firstStartedPulling="2026-02-18 00:10:45.147701533 +0000 UTC m=+128.662159268" lastFinishedPulling="2026-02-18 00:11:09.810438746 +0000 UTC m=+153.324896481" observedRunningTime="2026-02-18 00:11:12.551555364 +0000 UTC m=+156.066013099" watchObservedRunningTime="2026-02-18 00:11:12.553994548 +0000 UTC m=+156.068452293"
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.348581 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q4gm2" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" probeResult="failure" output=<
Feb 18 00:11:13 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s
Feb 18 00:11:13 crc kubenswrapper[5121]: >
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.353328 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fp6mh" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" probeResult="failure" output=<
Feb 18 00:11:13 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s
Feb 18 00:11:13 crc kubenswrapper[5121]: >
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.559094 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlq58" podStartSLOduration=6.759240841 podStartE2EDuration="34.559076793s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.99547972 +0000 UTC m=+125.509937455" lastFinishedPulling="2026-02-18 00:11:09.795315672 +0000 UTC m=+153.309773407" observedRunningTime="2026-02-18 00:11:13.552584134 +0000 UTC m=+157.067041879" watchObservedRunningTime="2026-02-18 00:11:13.559076793 +0000 UTC m=+157.073534538"
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.580171 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvff2" podStartSLOduration=6.947545745 podStartE2EDuration="31.580152423s" podCreationTimestamp="2026-02-18 00:10:42 +0000 UTC" firstStartedPulling="2026-02-18 00:10:45.160720843 +0000 UTC m=+128.675178578" lastFinishedPulling="2026-02-18 00:11:09.793327521 +0000 UTC m=+153.307785256" observedRunningTime="2026-02-18 00:11:13.575995915 +0000 UTC m=+157.090453650" watchObservedRunningTime="2026-02-18 00:11:13.580152423 +0000 UTC m=+157.094610168"
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.662174 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6rwlx"
Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.662257 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6rwlx"
Feb 18 00:11:14 crc kubenswrapper[5121]: I0218 00:11:14.706190 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6rwlx" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" probeResult="failure" output=<
Feb 18 00:11:14 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s
Feb 18 00:11:14 crc kubenswrapper[5121]: >
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.819147 5121 ???:1] "http: TLS handshake error from 192.168.126.11:43990: no serving certificate available for the kubelet"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.832784 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833678 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833706 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833726 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833734 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833778 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833788 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833903 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833919 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins"
Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833929 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.113872 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.114087 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.116980 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.120858 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.223799 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.224213 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326539 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326695 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326742 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.362725 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.435093 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.851536 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.853443 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.952786 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.983838 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 18 00:11:19 crc kubenswrapper[5121]: W0218 00:11:19.987296 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeddfcdef_6299_4eae_b4a2_6a5d3b5f41be.slice/crio-4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485 WatchSource:0}: Error finding container 4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485: Status 404 returned error can't find the container with id 4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.001187 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.001254 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.068929 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.240539 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.240771 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.323195 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.323540 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.566288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerStarted","Data":"4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485"}
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.602268 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.611241 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.799894 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.800191 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.869241 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.523889 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-mkw5h"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.575092 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerStarted","Data":"be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a"}
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.602464 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=3.602445086 podStartE2EDuration="3.602445086s" podCreationTimestamp="2026-02-18 00:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:21.600134006 +0000 UTC m=+165.114591751" watchObservedRunningTime="2026-02-18 00:11:21.602445086 +0000 UTC m=+165.116902851"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.620033 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.634538 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.848605 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.894942 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.014710 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlq58"]
Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.279151 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.318826 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.424720 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.431959 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.443184 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.482323 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.482607 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.483017 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: 
\"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.582510 5121 generic.go:358] "Generic (PLEG): container finished" podID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerID="be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a" exitCode=0 Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.582618 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerDied","Data":"be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a"} Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584431 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584488 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584506 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584559 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584726 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.601563 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.610675 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.750567 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.950256 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.237570 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.237854 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.294868 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591608 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerStarted","Data":"4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207"} Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591910 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerStarted","Data":"e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db"} Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591937 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlq58" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" containerID="cri-o://b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" gracePeriod=2 Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.592864 5121 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-czgg8" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" containerID="cri-o://5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" gracePeriod=2 Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.649483 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.685250 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.68522448 podStartE2EDuration="1.68522448s" podCreationTimestamp="2026-02-18 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:23.631266673 +0000 UTC m=+167.145724458" watchObservedRunningTime="2026-02-18 00:11:23.68522448 +0000 UTC m=+167.199682205" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.730651 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.783022 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.890223 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012044 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012538 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012145 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" (UID: "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012890 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.015323 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.018809 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" (UID: "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.023666 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.113963 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114067 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114110 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114191 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmbkr\" (UniqueName: 
\"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114280 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114322 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114560 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.115223 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities" (OuterVolumeSpecName: "utilities") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.115315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities" (OuterVolumeSpecName: "utilities") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.120522 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr" (OuterVolumeSpecName: "kube-api-access-xmbkr") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "kube-api-access-xmbkr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.127391 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k" (OuterVolumeSpecName: "kube-api-access-r489k") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "kube-api-access-r489k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.149919 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.179087 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215877 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215926 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215946 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215959 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215970 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215982 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.403216 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.403912 5121 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-fp6mh" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" containerID="cri-o://3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11" gracePeriod=2 Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612562 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" exitCode=0 Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612711 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612779 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"68089a9179b2ee54313136fab6546d018047ab31029619dfc6933c6ec3ac176c"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612803 5121 scope.go:117] "RemoveContainer" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612880 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618645 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" exitCode=0 Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618888 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618953 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"e3aa645abbf5b996b104f5c41a2f1ccc97cd615ef2eb0ff0e26a4d5ea630790e"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.619131 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.651427 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11" exitCode=0 Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.651497 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.656041 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.656264 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.658837 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerDied","Data":"4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485"} Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.658879 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.669238 5121 scope.go:117] "RemoveContainer" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.671133 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.680498 5121 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.692469 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.706876 5121 scope.go:117] "RemoveContainer" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.728980 5121 scope.go:117] "RemoveContainer" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.729692 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": container with ID starting with b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca not found: ID does not exist" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.729824 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"} err="failed to get container status \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": rpc error: code = NotFound desc = could not find container \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": container with ID starting with b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.729972 5121 scope.go:117] "RemoveContainer" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.730401 5121 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": container with ID starting with 8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717 not found: ID does not exist" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730443 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"} err="failed to get container status \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": rpc error: code = NotFound desc = could not find container \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": container with ID starting with 8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717 not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730477 5121 scope.go:117] "RemoveContainer" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.730864 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": container with ID starting with 08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54 not found: ID does not exist" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730910 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"} err="failed to get container status \"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": rpc error: code = NotFound desc = could not find container 
\"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": container with ID starting with 08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54 not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730942 5121 scope.go:117] "RemoveContainer" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.773859 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.777018 5121 scope.go:117] "RemoveContainer" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.804481 5121 scope.go:117] "RemoveContainer" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.834924 5121 scope.go:117] "RemoveContainer" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.835326 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": container with ID starting with 5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26 not found: ID does not exist" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835377 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835435 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835493 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835372 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"} err="failed to get container status \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": rpc error: code = NotFound desc = could not find container \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": container with ID starting with 5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26 not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835582 5121 scope.go:117] "RemoveContainer" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836608 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities" (OuterVolumeSpecName: "utilities") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.836724 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": container with ID starting with ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844 not found: ID does not exist" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836752 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"} err="failed to get container status \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": rpc error: code = NotFound desc = could not find container \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": container with ID starting with ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844 not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836772 5121 scope.go:117] "RemoveContainer" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48" Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.837005 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": container with ID starting with c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48 not found: ID does not exist" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.837022 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"} 
err="failed to get container status \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": rpc error: code = NotFound desc = could not find container \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": container with ID starting with c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48 not found: ID does not exist" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.845349 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8" (OuterVolumeSpecName: "kube-api-access-w89r8") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "kube-api-access-w89r8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.851477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936888 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936936 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936947 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.279872 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" path="/var/lib/kubelet/pods/93fd39e7-abb5-409e-8eed-e7757f484c00/volumes" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.281080 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af92a560-a657-450c-b3ad-baa6233127aa" path="/var/lib/kubelet/pods/af92a560-a657-450c-b3ad-baa6233127aa/volumes" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666265 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34"} Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666320 5121 scope.go:117] "RemoveContainer" containerID="3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666457 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.686429 5121 scope.go:117] "RemoveContainer" containerID="2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61" Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.687106 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.690671 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.702482 5121 scope.go:117] "RemoveContainer" containerID="c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.002323 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.003481 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6rwlx" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" containerID="cri-o://a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" gracePeriod=2 Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.279002 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" path="/var/lib/kubelet/pods/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff/volumes" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.370422 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472401 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472485 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472813 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.473863 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities" (OuterVolumeSpecName: "utilities") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.487979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw" (OuterVolumeSpecName: "kube-api-access-vkqfw") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "kube-api-access-vkqfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.574880 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.574918 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.580745 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.676358 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697764 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" exitCode=0 Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697868 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697862 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"} Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697979 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"d90fd19bec269295dcd896d5064cd72d8b3eeb6792e85da08c508892c9638ff0"} Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697999 5121 scope.go:117] "RemoveContainer" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.716941 5121 scope.go:117] "RemoveContainer" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.735158 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.739100 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.758549 5121 scope.go:117] "RemoveContainer" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775169 5121 scope.go:117] "RemoveContainer" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.775633 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": container with ID starting with a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c not found: ID does not exist" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775729 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"} err="failed to get container status \"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": rpc error: code = NotFound desc = could not find container \"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": container with ID starting with a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c not found: ID does not exist" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775758 5121 scope.go:117] "RemoveContainer" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935" Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.776023 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": container with ID starting with 0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935 not found: ID does not exist" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776099 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"} err="failed to get container status \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": rpc error: code = NotFound desc = could not find container \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": container with ID 
starting with 0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935 not found: ID does not exist" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776127 5121 scope.go:117] "RemoveContainer" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494" Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.776364 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": container with ID starting with 780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494 not found: ID does not exist" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494" Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776389 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"} err="failed to get container status \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": rpc error: code = NotFound desc = could not find container \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": container with ID starting with 780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494 not found: ID does not exist" Feb 18 00:11:29 crc kubenswrapper[5121]: I0218 00:11:29.279742 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" path="/var/lib/kubelet/pods/d5917f75-6117-4adb-a85e-6d40a331ef66/volumes" Feb 18 00:11:49 crc kubenswrapper[5121]: I0218 00:11:49.414675 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:11:59 crc kubenswrapper[5121]: I0218 00:11:59.805997 5121 ???:1] "http: TLS handshake error from 192.168.126.11:37704: no serving certificate available for the kubelet" Feb 18 00:12:01 crc kubenswrapper[5121]: 
E0218 00:12:01.375231 5121 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.377576 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379357 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379426 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379453 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379465 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379515 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379530 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379548 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" Feb 18 00:12:01 crc 
kubenswrapper[5121]: I0218 00:12:01.379560 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379616 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379628 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379643 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379703 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379721 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379732 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379748 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379791 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379814 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379826 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379883 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379896 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379918 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379930 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379994 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380014 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380042 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380057 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380414 5121 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380474 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380500 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380518 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380568 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405035 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405140 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405178 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405913 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406064 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406135 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406147 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406162 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406259 5121 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406163 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406421 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406445 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406467 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406479 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406498 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406511 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406564 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 18 00:12:01 crc 
kubenswrapper[5121]: I0218 00:12:01.406575 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406588 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406600 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406617 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406631 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406700 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406712 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407048 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407085 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407099 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407112 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407126 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407145 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407164 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407443 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407466 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407482 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407495 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407778 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 
00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407803 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.417845 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.432478 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.460372 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.581686 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582144 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582161 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582189 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582210 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582246 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582268 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582298 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582332 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683473 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683578 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683666 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683692 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683712 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683765 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683842 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683971 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 
crc kubenswrapper[5121]: I0218 00:12:01.684042 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684067 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684108 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684138 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684163 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684382 5121 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684409 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.755409 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: E0218 00:12:01.776595 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952ed539fccf63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,LastTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.934421 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"81d28147ad2807675d04e889a96b71e411c71303aba30f03797aa880c74c1b14"} Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.937032 5121 generic.go:358] "Generic (PLEG): container finished" podID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerID="4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.937147 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerDied","Data":"4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207"} Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.938605 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.938979 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.940856 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.942580 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944299 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944326 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944336 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944347 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" exitCode=2 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944435 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.049195 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.049741 5121 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050405 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050720 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050969 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.051000 5121 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.051188 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.252508 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.653864 5121 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.955386 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.958640 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513"} Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.959484 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.960235 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.277823 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.279284 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.279837 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409260 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409383 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock" (OuterVolumeSpecName: "var-lock") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409438 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409499 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.410373 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.410410 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.418816 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: E0218 00:12:03.455326 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.511620 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.912683 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.914171 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.914925 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.915344 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.915849 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967153 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerDied","Data":"e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db"} Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967212 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967210 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.971263 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972311 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" exitCode=0 Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972445 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972517 5121 scope.go:117] "RemoveContainer" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.993280 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.993880 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.994327 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.998743 5121 scope.go:117] "RemoveContainer" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021055 5121 scope.go:117] "RemoveContainer" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021802 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021803 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021869 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021954 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021998 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022020 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022037 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022164 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022452 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022513 5121 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022526 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022802 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.027897 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.036397 5121 scope.go:117] "RemoveContainer" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.050422 5121 scope.go:117] "RemoveContainer" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.067064 5121 scope.go:117] "RemoveContainer" containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.123605 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.123714 5121 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.140932 5121 scope.go:117] "RemoveContainer" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.141421 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": container with ID starting with c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de not found: ID does not exist" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141488 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de"} err="failed to get container status \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": rpc error: code = NotFound desc = could not find container \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": container with ID starting with c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141530 5121 scope.go:117] "RemoveContainer" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.141937 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": container with ID starting with 4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc not found: ID does not exist" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141983 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc"} err="failed to get container status \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": rpc error: code = NotFound desc = could not find container \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": container with ID starting with 4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142007 5121 scope.go:117] "RemoveContainer" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142282 5121 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": container with ID starting with b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e not found: ID does not exist" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142340 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e"} err="failed to get container status \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": rpc error: code = NotFound desc = could not find container \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": container with ID starting with b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142373 5121 scope.go:117] "RemoveContainer" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142629 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": container with ID starting with 3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0 not found: ID does not exist" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142679 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0"} err="failed to get container status \"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": rpc error: code = NotFound desc = could not find container 
\"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": container with ID starting with 3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0 not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142695 5121 scope.go:117] "RemoveContainer" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142960 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": container with ID starting with ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9 not found: ID does not exist" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143003 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9"} err="failed to get container status \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": rpc error: code = NotFound desc = could not find container \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": container with ID starting with ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9 not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143028 5121 scope.go:117] "RemoveContainer" containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.143286 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": container with ID starting with 02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc not found: ID does not exist" 
containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143313 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc"} err="failed to get container status \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": rpc error: code = NotFound desc = could not find container \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": container with ID starting with 02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.303192 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.303758 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.304181 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:05 crc kubenswrapper[5121]: E0218 00:12:05.056599 5121 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Feb 18 00:12:05 crc kubenswrapper[5121]: I0218 00:12:05.278047 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 18 00:12:07 crc kubenswrapper[5121]: I0218 00:12:07.277876 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:07 crc kubenswrapper[5121]: I0218 00:12:07.278167 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:08 crc kubenswrapper[5121]: E0218 00:12:08.258545 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="6.4s" Feb 18 00:12:10 crc kubenswrapper[5121]: E0218 00:12:10.370799 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952ed539fccf63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,LastTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062576 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062935 5121 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119" exitCode=1 Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062997 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119"} Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.063761 5121 scope.go:117] "RemoveContainer" containerID="33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.064356 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.065156 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.065904 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.475196 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" containerID="cri-o://76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" gracePeriod=15 Feb 18 00:12:14 crc kubenswrapper[5121]: E0218 00:12:14.659920 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="7s" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.898547 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.899489 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900009 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900279 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900562 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984776 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984883 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984932 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985026 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985167 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod 
\"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985593 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985619 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985642 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985692 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 
18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985873 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985920 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985980 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985990 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986566 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986933 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986969 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986991 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.987433 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.987900 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.993863 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c" (OuterVolumeSpecName: "kube-api-access-hgp4c") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "kube-api-access-hgp4c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.994422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.994992 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.995633 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.998389 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.998925 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.999497 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:14.999990 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.000240 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.073922 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.074082 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a994f43d642a705311d5e65d88f6f4804223e5f90573b51426bf56f7acbbd43c"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075469 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: 
connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075791 5121 generic.go:358] "Generic (PLEG): container finished" podID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" exitCode=0 Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075874 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075910 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerDied","Data":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075965 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerDied","Data":"7bc05f9957f09f27cee7504d54470ecd9c12fb4c5e2801caea1078ac4942d85e"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075958 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075996 5121 scope.go:117] "RemoveContainer" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.076472 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.076846 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077284 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077577 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077923 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.078273 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088214 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088242 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088254 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088266 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088277 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088286 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath 
\"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088297 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088308 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088318 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088328 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088340 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.103569 5121 scope.go:117] "RemoveContainer" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: E0218 00:12:15.103985 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": container with ID starting with 
76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4 not found: ID does not exist" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.104021 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"} err="failed to get container status \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": rpc error: code = NotFound desc = could not find container \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": container with ID starting with 76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4 not found: ID does not exist" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.110788 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.111793 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.112397 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 
00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.112849 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.270591 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.272744 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.273822 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.274762 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.275391 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.295506 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.295574 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:16 crc kubenswrapper[5121]: E0218 00:12:16.296435 5121 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.296958 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: W0218 00:12:16.332728 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39 WatchSource:0}: Error finding container 74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39: Status 404 returned error can't find the container with id 74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39 Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.101506 5121 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="ac1a72c0c5c278f20dbc3c4c72881272e9e0be75d7ca2779356b125b5a60949c" exitCode=0 Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.101718 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"ac1a72c0c5c278f20dbc3c4c72881272e9e0be75d7ca2779356b125b5a60949c"} Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102065 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39"} Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102635 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102832 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:17 crc kubenswrapper[5121]: E0218 00:12:17.103558 5121 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.103593 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.104280 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.104675 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.105110 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.284896 5121 status_manager.go:895] "Failed 
to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.285635 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.286858 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.287876 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.288463 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.918925 5121 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.923847 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124785 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"797b35d71eacde3014a22c3b6925a5573cec902ea796336f0eb7a99594ef5b18"} Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124829 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0627a53e2013a9609ac288d82ec104c66adf2f4affe28fb594643a1f1039e275"} Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124844 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.133425 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6b69b85da8f3de930f6019987007910dda487e14db4f3739da94ed0aa052090b"} Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.134958 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4ab93789cd85420e2eb73bbeec247813eb3b04296fe03601e5e141c99d58f846"} Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135053 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"119f968bcc455ee0e9e5d7defdf75ad2b46f92777cf122d83ffbe5d44f9b9acf"} Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135138 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.133761 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135289 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.298118 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.298448 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.310558 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.167820 5121 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.168272 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.227546 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="22f0f020-6cd6-4056-9ee7-3a201b72fafc" Feb 18 00:12:25 crc 
kubenswrapper[5121]: I0218 00:12:25.174829 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:25 crc kubenswrapper[5121]: I0218 00:12:25.174876 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:25 crc kubenswrapper[5121]: I0218 00:12:25.181911 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:26 crc kubenswrapper[5121]: I0218 00:12:26.181203 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:26 crc kubenswrapper[5121]: I0218 00:12:26.181257 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:27 crc kubenswrapper[5121]: I0218 00:12:27.294111 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="22f0f020-6cd6-4056-9ee7-3a201b72fafc" Feb 18 00:12:29 crc kubenswrapper[5121]: I0218 00:12:29.142892 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:12:33 crc kubenswrapper[5121]: I0218 00:12:33.975962 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.251192 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.454477 5121 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.502726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.544333 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.544459 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.765627 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.913149 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.186806 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.299130 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.396538 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.738151 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.962374 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.074630 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.090267 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.196734 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.228716 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.331533 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.435303 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.517804 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.566474 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.696390 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.732765 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.785640 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.844697 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.863449 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.958718 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.041462 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.182953 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.301171 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.428086 
5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.432487 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.438078 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.510559 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.849774 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.926108 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.999757 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.089519 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.095026 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.186253 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.266788 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.272060 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.329321 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.357350 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.410134 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.516716 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.770932 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.783401 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.789426 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 
00:12:38.821006 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.822331 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.891020 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.939277 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.090748 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.122029 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.130625 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.206806 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.246787 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.279397 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 18 00:12:39 crc 
kubenswrapper[5121]: I0218 00:12:39.306300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.444476 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.536422 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.740882 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.743480 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.919830 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.136859 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.186851 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.309064 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.349896 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.508729 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.600980 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.601480 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.616595 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.640946 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.647187 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.671541 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.678185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.679405 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.695366 5121 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.707386 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.714690 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.748638 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.812638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.839992 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.853547 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.999047 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.082381 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.195384 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.224390 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.243426 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.287607 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.306094 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.322941 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.348578 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.414886 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.451619 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.470775 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.607640 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 18 00:12:41 crc 
kubenswrapper[5121]: I0218 00:12:41.638585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.651134 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.878859 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.917950 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.966959 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.969096 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.076228 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.134886 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.178611 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.243498 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.261702 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.317077 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.365206 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.498833 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.564152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.604810 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.621673 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.626203 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.763084 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.795866 
5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.993701 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.166694 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.181793 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.193093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.239869 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.285302 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.308028 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.347177 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.414354 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.488726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.499040 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.521596 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.523302 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.542190 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.551298 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.585122 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.627538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.800680 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.872795 5121 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.890758 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.894437 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.912305 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.923539 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.978849 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.032256 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.061783 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.101731 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.150521 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 
00:12:44.152592 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.172705 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.356055 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.388930 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.419310 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.429228 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.433545 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.499902 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.604663 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.730524 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.746167 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.753948 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.770473 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.791023 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.792546 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.873904 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.084289 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.143072 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.166022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.179024 5121 reflector.go:430] "Caches 
populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.206989 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.245682 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.460965 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.621163 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.763847 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.789447 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.856429 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.857093 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.869795 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.877319 5121 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.924448 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.009075 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.109941 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.150847 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.178048 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.187148 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.229909 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.246033 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.441585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.541417 5121 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.551670 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.658163 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.658230 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.767891 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.824635 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.824947 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.876423 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.919871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.975300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 18 00:12:46 crc kubenswrapper[5121]: 
I0218 00:12:46.975928 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.031050 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.110695 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.240928 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.375730 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.436828 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.446599 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.497560 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.556638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.570981 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.659872 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.855638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.928644 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.939017 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.941921 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.017298 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.049707 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.058380 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.109880 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.185503 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.352891 5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.354763 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.354738884 podStartE2EDuration="47.354738884s" podCreationTimestamp="2026-02-18 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:24.183480181 +0000 UTC m=+227.697937916" watchObservedRunningTime="2026-02-18 00:12:48.354738884 +0000 UTC m=+251.869196649"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.360707 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-m7q6l"]
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.360784 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"]
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361423 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361459 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361826 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361854 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361894 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361906 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.362056 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.362091 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.374898 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.374967 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.376850 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.379927 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.379962 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380214 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380265 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380878 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381014 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381057 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381129 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381193 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381645 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.391776 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.404525 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.410395 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.41037239 podStartE2EDuration="24.41037239s" podCreationTimestamp="2026-02-18 00:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:48.405797548 +0000 UTC m=+251.920255323" watchObservedRunningTime="2026-02-18 00:12:48.41037239 +0000 UTC m=+251.924830165"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.457377 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.466905 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.466967 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467066 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467131 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467201 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467296 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467354 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467408 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467452 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467548 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467637 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467734 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467776 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467826 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.475601 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.541798 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.562103 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569190 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569234 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569275 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569306 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569330 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569899 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569928 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569973 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570018 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570045 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570213 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.571196 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.572598 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.573539 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.580015 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.580241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581161 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581742 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581873 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.582341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.587246 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.591343 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.600404 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.607186 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.694057 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.882634 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.923282 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"]
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.135202 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.286291 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" path="/var/lib/kubelet/pods/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b/volumes"
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" event={"ID":"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd","Type":"ContainerStarted","Data":"3d65e9575f65a399a4f8415e38d835e5c7973eebe97c3322818f8376a268c985"}
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347249 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" event={"ID":"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd","Type":"ContainerStarted","Data":"ff01553fed282844a7bb9f68b1c30560ae8b700b77316253f03a651df3c1fc6e"}
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347485 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.406261 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" podStartSLOduration=60.406232865 podStartE2EDuration="1m0.406232865s" podCreationTimestamp="2026-02-18 00:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:49.405781023 +0000 UTC m=+252.920238778" watchObservedRunningTime="2026-02-18 00:12:49.406232865 +0000 UTC m=+252.920690600"
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.452360 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.584835 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.784572 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.216538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.308909 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.347849 5121 patch_prober.go:28] interesting pod/oauth-openshift-5598d4f74c-wh9tq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded" start-of-body=
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.348001 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" podUID="4aa710d2-ba83-4fc7-ac7f-ed51869a02bd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded"
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.453957 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.528943 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.641009 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"
Feb 18 00:12:51 crc kubenswrapper[5121]: I0218 00:12:51.292273 5121 ???:1] "http: TLS handshake error from 192.168.126.11:46500: no serving certificate available for the kubelet"
Feb 18 00:12:58 crc kubenswrapper[5121]: I0218 00:12:58.064580 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 00:12:58 crc kubenswrapper[5121]: I0218 00:12:58.065858 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" gracePeriod=5
Feb 18 00:13:00 crc kubenswrapper[5121]: I0218 00:13:00.287985 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.450027 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.450089 5121 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" exitCode=137
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.661951 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.662099 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.739789 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.739955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740041 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740057 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740150 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740273 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740393 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740531 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740939 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740957 5121 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740966 5121 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740976 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.750698 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.842064 5121 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460062 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460270 5121 scope.go:117] "RemoveContainer" containerID="cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460310 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.545360 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.545471 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.280249 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 
00:13:05.280973 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.297466 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.297556 5121 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bbb88bd-16dc-4dd3-aec8-8aac7cffee69" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.321465 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.321575 5121 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bbb88bd-16dc-4dd3-aec8-8aac7cffee69" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.658796 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 18 00:13:07 crc kubenswrapper[5121]: I0218 00:13:07.701422 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 18 00:13:07 crc kubenswrapper[5121]: I0218 00:13:07.860130 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.501989 5121 generic.go:358] "Generic (PLEG): container finished" podID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" exitCode=0 Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.502063 5121 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.503019 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:13:10 crc kubenswrapper[5121]: I0218 00:13:10.145111 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.603944 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.607993 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"} Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.609104 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.610512 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:13:12 crc kubenswrapper[5121]: I0218 00:13:12.191238 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 18 00:13:12 crc kubenswrapper[5121]: I0218 00:13:12.808574 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 
00:13:14.791902 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.792825 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" containerID="cri-o://4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" gracePeriod=30 Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.797724 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.798132 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" containerID="cri-o://9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" gracePeriod=30 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.250264 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.293275 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294784 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294809 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294862 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294870 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.295213 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.295240 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.301464 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.341953 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.343206 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378486 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378539 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378614 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378685 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378712 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378758 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379374 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379491 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config" (OuterVolumeSpecName: "config") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379597 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp" (OuterVolumeSpecName: "tmp") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). 
InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380177 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380287 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380412 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380474 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380543 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380678 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380704 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380720 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380724 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.381066 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382222 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") on 
node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382260 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382272 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382284 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.386151 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p" (OuterVolumeSpecName: "kube-api-access-p4t5p") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "kube-api-access-p4t5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.386813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.390233 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.392796 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483411 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483535 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483611 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483640 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483720 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: 
\"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483828 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484499 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484732 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484809 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484872 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485159 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485009 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config" (OuterVolumeSpecName: "config") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485354 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485416 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485492 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485615 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485696 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nngh\" 
(UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485801 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485822 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485841 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485865 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486390 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486528 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp" (OuterVolumeSpecName: "tmp") pod 
"cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486765 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.487168 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.487481 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.488120 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89" (OuterVolumeSpecName: "kube-api-access-gcc89") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "kube-api-access-gcc89". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.491022 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.491976 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.506046 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.591618 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.592521 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593069 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593455 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594059 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594218 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594355 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594107 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593835 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.597807 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.601299 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " 
pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.613491 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640433 5121 generic.go:358] "Generic (PLEG): container finished" podID="cc530ba0-1249-4787-8584-22f866581116" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" exitCode=0 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640513 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerDied","Data":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640551 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerDied","Data":"8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640558 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640575 5121 scope.go:117] "RemoveContainer" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.642929 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643357 5121 generic.go:358] "Generic (PLEG): container finished" podID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" exitCode=0 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643481 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerDied","Data":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643762 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerDied","Data":"b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.674143 5121 scope.go:117] "RemoveContainer" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: E0218 00:13:15.675220 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": container with ID starting with 9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79 not found: ID does not exist" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.675263 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"} err="failed to get container status \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": rpc error: code = NotFound desc = could not find container \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": container with ID starting with 9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79 not found: ID does not exist" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.675290 5121 scope.go:117] "RemoveContainer" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.687158 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.699616 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.704510 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.709291 5121 scope.go:117] "RemoveContainer" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: E0218 00:13:15.709919 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": container with ID starting with 4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6 not found: ID does not exist" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.709970 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"} err="failed to get container status \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": rpc error: code = NotFound desc = could not find container \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": container with ID starting with 4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6 not found: ID does not exist" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.715192 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.722176 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.923204 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.945310 5121 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: W0218 00:13:15.949939 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1792aaaf_7683_495e_9fab_d35daee8eac0.slice/crio-2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc WatchSource:0}: Error finding container 2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc: Status 404 returned error can't find the container with id 2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.659033 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerStarted","Data":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.659590 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerStarted","Data":"2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.660047 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664447 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerStarted","Data":"05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664534 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerStarted","Data":"01518e01f94c0717f14956ba308198eb334de1750e195936b1a5d46a78a8b446"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664795 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.695219 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.699028 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" podStartSLOduration=1.698991292 podStartE2EDuration="1.698991292s" podCreationTimestamp="2026-02-18 00:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:13:16.686277556 +0000 UTC m=+280.200735321" watchObservedRunningTime="2026-02-18 00:13:16.698991292 +0000 UTC m=+280.213449097" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.724368 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" podStartSLOduration=1.724334544 podStartE2EDuration="1.724334544s" podCreationTimestamp="2026-02-18 00:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:13:16.71636222 +0000 UTC m=+280.230819985" watchObservedRunningTime="2026-02-18 00:13:16.724334544 +0000 UTC m=+280.238792349" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.796594 5121 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.282094 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc530ba0-1249-4787-8584-22f866581116" path="/var/lib/kubelet/pods/cc530ba0-1249-4787-8584-22f866581116/volumes" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.283930 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" path="/var/lib/kubelet/pods/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69/volumes" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.354040 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:18 crc kubenswrapper[5121]: I0218 00:13:18.791028 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:13:21 crc kubenswrapper[5121]: I0218 00:13:21.764825 5121 ???:1] "http: TLS handshake error from 192.168.126.11:50076: no serving certificate available for the kubelet" Feb 18 00:13:24 crc kubenswrapper[5121]: I0218 00:13:24.633461 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 18 00:13:25 crc kubenswrapper[5121]: I0218 00:13:25.011145 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:28 crc kubenswrapper[5121]: I0218 00:13:28.528290 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.545064 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.546142 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.546230 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.547326 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.547438 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" gracePeriod=600 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.838075 5121 generic.go:358] "Generic (PLEG): container finished" podID="4000e83d-77d2-4372-93a4-5dbb22251239" containerID="c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234" exitCode=0 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.838805 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" 
event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerDied","Data":"c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234"} Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.843794 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" exitCode=0 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.843916 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} Feb 18 00:13:35 crc kubenswrapper[5121]: I0218 00:13:35.856282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.245391 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.352550 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"4000e83d-77d2-4372-93a4-5dbb22251239\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.353044 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"4000e83d-77d2-4372-93a4-5dbb22251239\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.354104 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca" (OuterVolumeSpecName: "serviceca") pod "4000e83d-77d2-4372-93a4-5dbb22251239" (UID: "4000e83d-77d2-4372-93a4-5dbb22251239"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.364187 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp" (OuterVolumeSpecName: "kube-api-access-9nvwp") pod "4000e83d-77d2-4372-93a4-5dbb22251239" (UID: "4000e83d-77d2-4372-93a4-5dbb22251239"). InnerVolumeSpecName "kube-api-access-9nvwp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.454379 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.454425 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.869099 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.870480 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerDied","Data":"3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0"} Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.870564 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0" Feb 18 00:13:37 crc kubenswrapper[5121]: I0218 00:13:37.441360 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:13:37 crc kubenswrapper[5121]: I0218 00:13:37.442070 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:14:02 crc kubenswrapper[5121]: I0218 00:14:02.896824 5121 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:14:14 crc kubenswrapper[5121]: I0218 00:14:14.812093 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:14:14 crc kubenswrapper[5121]: I0218 00:14:14.813148 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" containerID="cri-o://05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55" gracePeriod=30 Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.141024 5121 generic.go:358] "Generic (PLEG): container finished" podID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerID="05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55" exitCode=0 Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.141214 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerDied","Data":"05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55"} Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.292584 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.327793 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.328602 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330631 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330756 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330865 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.331037 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.331114 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.338015 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.341699 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419387 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420306 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420372 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: 
\"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420558 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420594 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420630 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420715 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod 
\"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420764 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420829 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421532 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421579 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config" (OuterVolumeSpecName: "config") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421721 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca" (OuterVolumeSpecName: "client-ca") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.422215 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp" (OuterVolumeSpecName: "tmp") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.428610 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.428607 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4" (OuterVolumeSpecName: "kube-api-access-dw9g4") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "kube-api-access-dw9g4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522413 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522728 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522945 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523033 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod 
\"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523086 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523153 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523173 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523193 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523210 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523231 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523247 5121 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523989 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.524548 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.524839 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.525186 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.526982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.554277 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.657809 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.965413 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.981423 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.149830 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerDied","Data":"01518e01f94c0717f14956ba308198eb334de1750e195936b1a5d46a78a8b446"} Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.149933 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.150287 5121 scope.go:117] "RemoveContainer" containerID="05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55" Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.151897 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" event={"ID":"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0","Type":"ContainerStarted","Data":"2bab1c8ca9c8e135a847b31f6d8c10a1833ffa9ec6748b93b2bbcfb587c8987e"} Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.195013 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.199763 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.166804 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" event={"ID":"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0","Type":"ContainerStarted","Data":"18ed351560fac002a026b73bf851cf642629b2a043f88f986437752b73a13e53"} Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.167223 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.175302 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.200182 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" podStartSLOduration=3.199836165 
podStartE2EDuration="3.199836165s" podCreationTimestamp="2026-02-18 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:17.196907508 +0000 UTC m=+340.711365353" watchObservedRunningTime="2026-02-18 00:14:17.199836165 +0000 UTC m=+340.714293910" Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.287556 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" path="/var/lib/kubelet/pods/93f589c8-9d36-4f32-99ff-de8809c4d470/volumes" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.562882 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.566347 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rdts" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" containerID="cri-o://c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" gracePeriod=30 Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.569541 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.569991 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttn8q" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" containerID="cri-o://1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" gracePeriod=30 Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.596259 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.596547 5121 kuberuntime_container.go:858] "Killing container 
with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" containerID="cri-o://d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" gracePeriod=30 Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.618201 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.618514 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4gm2" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" containerID="cri-o://6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" gracePeriod=30 Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.632187 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.632542 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvff2" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" containerID="cri-o://2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" gracePeriod=30 Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.643023 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.660983 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"] Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.661187 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.703619 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pvff2" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" probeResult="failure" output="" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757045 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757111 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757147 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757180 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod 
\"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858899 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858935 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.859046 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.860917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.861108 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.875954 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.879832 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.957793 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.966960 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063625 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063850 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.075418 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities" (OuterVolumeSpecName: "utilities") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.084827 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm" (OuterVolumeSpecName: "kube-api-access-tddwm") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "kube-api-access-tddwm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.127381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.162641 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165593 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165617 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165629 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.173449 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.203021 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.256429 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" exitCode=0 Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.256679 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.267103 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.267225 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.281821 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" exitCode=0 Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282021 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282460 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282515 5121 scope.go:117] "RemoveContainer" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289087 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289141 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289208 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " Feb 18 00:14:24 crc kubenswrapper[5121]: 
I0218 00:14:24.289231 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.293766 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities" (OuterVolumeSpecName: "utilities") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.294866 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg" (OuterVolumeSpecName: "kube-api-access-5h9tg") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "kube-api-access-5h9tg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.293842 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities" (OuterVolumeSpecName: "utilities") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.298123 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303400 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq" (OuterVolumeSpecName: "kube-api-access-5cldq") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). InnerVolumeSpecName "kube-api-access-5cldq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303942 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" exitCode=0 Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303986 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.304031 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.304223 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314602 5121 generic.go:358] "Generic (PLEG): container finished" podID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" exitCode=0 Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314715 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314746 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"a35c1a8554f97c336c169b9b7ab07394eb161632ed304015d160d6c0a71bba70"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314803 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314888 5121 scope.go:117] "RemoveContainer" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.317461 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.322907 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324113 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" exitCode=0 Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324232 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"b7ed7dc670ad2dcb9f8640d5f44b830e13e4f0554ae87aa8ba2653124a6f77c7"} Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324675 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.348239 5121 scope.go:117] "RemoveContainer" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.382085 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.388706 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390021 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390092 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390160 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390230 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod 
\"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390472 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390482 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390494 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390505 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390516 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390524 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.391671 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod 
"cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.391914 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp" (OuterVolumeSpecName: "tmp") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.396818 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.397002 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj" (OuterVolumeSpecName: "kube-api-access-nk5bj") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "kube-api-access-nk5bj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.397671 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.398523 5121 scope.go:117] "RemoveContainer" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.398935 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": container with ID starting with 1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc not found: ID does not exist" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.398995 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"} err="failed to get container status \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": rpc error: code = NotFound desc = could not find container \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": container with ID starting with 1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399031 5121 scope.go:117] "RemoveContainer" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.399490 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": container with ID starting with 7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72 
not found: ID does not exist" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399527 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"} err="failed to get container status \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": rpc error: code = NotFound desc = could not find container \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": container with ID starting with 7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399566 5121 scope.go:117] "RemoveContainer" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.400380 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": container with ID starting with cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a not found: ID does not exist" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.400408 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"} err="failed to get container status \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": rpc error: code = NotFound desc = could not find container \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": container with ID starting with cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 
00:14:24.400424 5121 scope.go:117] "RemoveContainer" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.404694 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.408039 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.415055 5121 scope.go:117] "RemoveContainer" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.443843 5121 scope.go:117] "RemoveContainer" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478223 5121 scope.go:117] "RemoveContainer" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.478748 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": container with ID starting with 6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014 not found: ID does not exist" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478796 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"} err="failed to get container status \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": rpc error: code = NotFound desc = could not find container \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": container with ID starting with 
6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478825 5121 scope.go:117] "RemoveContainer" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.479206 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": container with ID starting with be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5 not found: ID does not exist" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.479338 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"} err="failed to get container status \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": rpc error: code = NotFound desc = could not find container \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": container with ID starting with be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.479424 5121 scope.go:117] "RemoveContainer" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.479934 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": container with ID starting with 8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a not found: ID does not exist" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a" Feb 18 00:14:24 crc 
kubenswrapper[5121]: I0218 00:14:24.479979 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"} err="failed to get container status \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": rpc error: code = NotFound desc = could not find container \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": container with ID starting with 8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.480001 5121 scope.go:117] "RemoveContainer" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491439 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491481 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491542 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491865 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491886 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491933 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491946 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.493575 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities" (OuterVolumeSpecName: "utilities") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.495998 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv" (OuterVolumeSpecName: "kube-api-access-xs2gv") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "kube-api-access-xs2gv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.510953 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.542121 5121 scope.go:117] "RemoveContainer" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.547305 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": container with ID starting with d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514 not found: ID does not exist" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547355 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"} err="failed to get container status \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": rpc error: code = NotFound desc = could not find container \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": container with ID starting with d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547388 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.547866 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": container with ID starting with 
caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b not found: ID does not exist" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547888 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} err="failed to get container status \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": rpc error: code = NotFound desc = could not find container \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": container with ID starting with caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547903 5121 scope.go:117] "RemoveContainer" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.568767 5121 scope.go:117] "RemoveContainer" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.589238 5121 scope.go:117] "RemoveContainer" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.593055 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.593081 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611125 5121 scope.go:117] "RemoveContainer" 
containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.611638 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": container with ID starting with c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90 not found: ID does not exist" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611693 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"} err="failed to get container status \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": rpc error: code = NotFound desc = could not find container \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": container with ID starting with c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611719 5121 scope.go:117] "RemoveContainer" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.612288 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": container with ID starting with a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d not found: ID does not exist" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612314 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"} err="failed to get container status \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": rpc error: code = NotFound desc = could not find container \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": container with ID starting with a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612329 5121 scope.go:117] "RemoveContainer" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.612906 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": container with ID starting with bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181 not found: ID does not exist" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612943 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181"} err="failed to get container status \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": rpc error: code = NotFound desc = could not find container \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": container with ID starting with bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.622527 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.629401 5121 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.629843 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.660396 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.665120 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.694600 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.277847 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" path="/var/lib/kubelet/pods/40bc3a2a-4cd6-44f6-beca-0193584836a9/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.279265 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" path="/var/lib/kubelet/pods/6854ad9b-1632-47d4-82bc-bdd90768bc2a/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.280259 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787ee824-3e40-4929-9eda-a58528843d28" path="/var/lib/kubelet/pods/787ee824-3e40-4929-9eda-a58528843d28/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.281629 5121 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" path="/var/lib/kubelet/pods/cad52ef7-8080-48a2-91e3-5bcfc007b196/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343721 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"2acd9157a5c0303ad67f67ca0941df951cb9a99c9745a061c1e6e8e477768d5b"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343827 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343836 5121 scope.go:117] "RemoveContainer" containerID="2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347660 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" event={"ID":"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2","Type":"ContainerStarted","Data":"9e9ffecf2797fccc41ed4577f7462c1445330d0871b7d9d5b2303f9065e35753"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" event={"ID":"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2","Type":"ContainerStarted","Data":"fb87fd6d607724ad9bf74ce3f7b633577bd6b6872463224d187dc33c7ece7778"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347867 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.352905 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.368914 5121 
scope.go:117] "RemoveContainer" containerID="3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.377321 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" podStartSLOduration=2.37730219 podStartE2EDuration="2.37730219s" podCreationTimestamp="2026-02-18 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:25.369507235 +0000 UTC m=+348.883965000" watchObservedRunningTime="2026-02-18 00:14:25.37730219 +0000 UTC m=+348.891759935" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.391064 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.408046 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.411228 5121 scope.go:117] "RemoveContainer" containerID="9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.783572 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784844 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784879 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784899 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" 
containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784909 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784928 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784939 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784952 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784981 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784998 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785010 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785022 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785031 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785043 5121 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785051 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785074 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785083 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785100 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785109 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785122 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785133 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786583 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786614 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 
00:14:25.786670 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786681 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786702 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786711 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786880 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786908 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786933 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786949 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786966 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787158 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" 
containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787183 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787347 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.798834 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.799032 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.802808 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918271 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.975519 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.991093 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.991254 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.993934 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019751 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019826 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcbg\" (UniqueName: 
\"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019884 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019905 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.020448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.020614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.047607 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.114445 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.121536 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.122179 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.122245 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " 
pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.123378 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.123454 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.145872 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.314496 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.608575 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:26 crc kubenswrapper[5121]: W0218 00:14:26.619027 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9e0e10c_e462_4d05_9e54_25f1527555c1.slice/crio-02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1 WatchSource:0}: Error finding container 02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1: Status 404 returned error can't find the container with id 02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1 Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.734532 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:26 crc kubenswrapper[5121]: W0218 00:14:26.744177 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b15350_ab27_4821_bfb5_2ca12b36c32d.slice/crio-a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2 WatchSource:0}: Error finding container a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2: Status 404 returned error can't find the container with id a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.282835 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" path="/var/lib/kubelet/pods/55ab02de-5c10-4bc3-b031-3205a22662ae/volumes" Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380096 5121 generic.go:358] "Generic (PLEG): container finished" podID="17b15350-ab27-4821-bfb5-2ca12b36c32d" containerID="695efa8716fbb1382b6430d1e3b3351427f8a2c793baf206a4a0b5bb40681ddf" 
exitCode=0 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380219 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerDied","Data":"695efa8716fbb1382b6430d1e3b3351427f8a2c793baf206a4a0b5bb40681ddf"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380278 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerStarted","Data":"a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.384237 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18" exitCode=0 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.385122 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.385155 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerStarted","Data":"02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.177424 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.196709 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.196900 5121 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.200118 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.269851 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.269934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.270099 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.371258 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: 
I0218 00:14:28.371316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.371339 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.372205 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.373167 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.379457 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.385543 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.388457 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.390507 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.393401 5121 generic.go:358] "Generic (PLEG): container finished" podID="17b15350-ab27-4821-bfb5-2ca12b36c32d" containerID="b0535d96b50b19d29da4e46480762c9457882317b00bf2b0fb09a9a21a955cdf" exitCode=0 Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.393494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerDied","Data":"b0535d96b50b19d29da4e46480762c9457882317b00bf2b0fb09a9a21a955cdf"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.395745 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.405117 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.405314 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1" 
exitCode=0 Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477617 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477762 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.512738 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.522547 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.538715 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.556638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579401 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579857 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579898 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579967 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579991 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580010 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580053 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580089 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod 
\"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580119 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580137 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580869 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.587198 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.616341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.617539 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681596 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681615 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681676 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681757 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681775 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.683075 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 
00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.684435 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.686890 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.688141 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.696340 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.706262 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.707318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.754986 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.891086 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.999188 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.199494 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.340854 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:29 crc kubenswrapper[5121]: W0218 00:14:29.344693 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a55eed_2143_45b6_854a_67ea1f2842d9.slice/crio-efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351 WatchSource:0}: Error finding container efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351: Status 404 returned error can't find the container with id efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.423293 
5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerStarted","Data":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.444861 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" event={"ID":"39a55eed-2143-45b6-854a-67ea1f2842d9","Type":"ContainerStarted","Data":"efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451260 5121 generic.go:358] "Generic (PLEG): container finished" podID="7f3e3949-ddb8-4d79-8063-8e319147d2b5" containerID="152449ca31ab356a7b6f003f28252a7246f5c4bbc0beba6ae0a09d44123d9b19" exitCode=0 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerDied","Data":"152449ca31ab356a7b6f003f28252a7246f5c4bbc0beba6ae0a09d44123d9b19"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451493 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"feb3a607f3f56e77975f785301b061cc596d0468819c14a6c8b04a2169eba85f"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.462714 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3bb7195-d543-4fba-bbe3-661b888f6ab3" containerID="7d8e7c7522a172304434cadf1bd36d87f8c6ccabefa90563bf1f0309846201b7" exitCode=0 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.462975 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" 
event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerDied","Data":"7d8e7c7522a172304434cadf1bd36d87f8c6ccabefa90563bf1f0309846201b7"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.463025 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"61deae93ec3b56fc3f8a17bd5230306fff8989c7fe8f2d357bd4be4f5ec383a2"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.474707 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerStarted","Data":"c9239c6c862695cfb680c9192ed0c93fd102bcc5c40085f9d8a062351dc2186e"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.485380 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9knfx" podStartSLOduration=3.900433393 podStartE2EDuration="4.485351715s" podCreationTimestamp="2026-02-18 00:14:25 +0000 UTC" firstStartedPulling="2026-02-18 00:14:27.387405325 +0000 UTC m=+350.901863060" lastFinishedPulling="2026-02-18 00:14:27.972323647 +0000 UTC m=+351.486781382" observedRunningTime="2026-02-18 00:14:29.453711224 +0000 UTC m=+352.968168989" watchObservedRunningTime="2026-02-18 00:14:29.485351715 +0000 UTC m=+352.999809580" Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.514013 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m24xj" podStartSLOduration=3.9478544749999998 podStartE2EDuration="4.513986916s" podCreationTimestamp="2026-02-18 00:14:25 +0000 UTC" firstStartedPulling="2026-02-18 00:14:27.381240643 +0000 UTC m=+350.895698388" lastFinishedPulling="2026-02-18 00:14:27.947373094 +0000 UTC m=+351.461830829" observedRunningTime="2026-02-18 00:14:29.507766602 +0000 UTC m=+353.022224347" 
watchObservedRunningTime="2026-02-18 00:14:29.513986916 +0000 UTC m=+353.028444641" Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.480767 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" event={"ID":"39a55eed-2143-45b6-854a-67ea1f2842d9","Type":"ContainerStarted","Data":"a917b28c3ac6918f67d939637a1892a665b55c12db5d3815ce542162ad2ab7fd"} Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.481105 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.483348 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011"} Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.486515 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93"} Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.508066 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" podStartSLOduration=2.50804974 podStartE2EDuration="2.50804974s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:30.507796783 +0000 UTC m=+354.022254518" watchObservedRunningTime="2026-02-18 00:14:30.50804974 +0000 UTC m=+354.022507485" Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.495548 5121 generic.go:358] "Generic (PLEG): container finished" 
podID="7f3e3949-ddb8-4d79-8063-8e319147d2b5" containerID="418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011" exitCode=0 Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.495634 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerDied","Data":"418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011"} Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.498986 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3bb7195-d543-4fba-bbe3-661b888f6ab3" containerID="8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93" exitCode=0 Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.499105 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerDied","Data":"8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93"} Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.508263 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"219a660440b2bf82c64723910432192df9680da2eb2959df9cac5ae85ce60327"} Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.511303 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"d1347696d0690c5d4142655da2be5d681d16ad46135cad36344552f2a69ca6ef"} Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.530717 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svl96" podStartSLOduration=3.84664176 podStartE2EDuration="4.530693574s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" 
firstStartedPulling="2026-02-18 00:14:29.453090698 +0000 UTC m=+352.967548443" lastFinishedPulling="2026-02-18 00:14:30.137142532 +0000 UTC m=+353.651600257" observedRunningTime="2026-02-18 00:14:32.524501352 +0000 UTC m=+356.038959087" watchObservedRunningTime="2026-02-18 00:14:32.530693574 +0000 UTC m=+356.045151329" Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.552547 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hnxm" podStartSLOduration=3.898689767 podStartE2EDuration="4.552524027s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" firstStartedPulling="2026-02-18 00:14:29.464149529 +0000 UTC m=+352.978607254" lastFinishedPulling="2026-02-18 00:14:30.117983769 +0000 UTC m=+353.632441514" observedRunningTime="2026-02-18 00:14:32.548698626 +0000 UTC m=+356.063156371" watchObservedRunningTime="2026-02-18 00:14:32.552524027 +0000 UTC m=+356.066981782" Feb 18 00:14:34 crc kubenswrapper[5121]: I0218 00:14:34.803067 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:14:34 crc kubenswrapper[5121]: I0218 00:14:34.803835 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager" containerID="cri-o://dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" gracePeriod=30 Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.326137 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.355330 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"] Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.355995 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.356015 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.356122 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.367686 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.379474 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"] Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395273 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395324 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395362 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395398 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395569 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: 
\"1792aaaf-7683-495e-9fab-d35daee8eac0\") " Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395903 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395923 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp" (OuterVolumeSpecName: "tmp") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395974 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396023 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396044 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396081 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396348 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config" (OuterVolumeSpecName: "config") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396612 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca" (OuterVolumeSpecName: "client-ca") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.406821 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh" (OuterVolumeSpecName: "kube-api-access-5nngh") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "kube-api-access-5nngh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.407544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497329 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497390 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497452 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497560 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497572 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497585 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497595 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498198 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" 
(UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498925 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.502394 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.516535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544222 5121 generic.go:358] "Generic (PLEG): container finished" podID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" exitCode=0 Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544462 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" 
event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerDied","Data":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"} Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerDied","Data":"2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc"} Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544524 5121 scope.go:117] "RemoveContainer" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544789 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.575329 5121 scope.go:117] "RemoveContainer" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" Feb 18 00:14:35 crc kubenswrapper[5121]: E0218 00:14:35.577400 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": container with ID starting with dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e not found: ID does not exist" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.577485 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"} err="failed to get container status \"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": rpc error: code = NotFound desc = could not find container 
\"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": container with ID starting with dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e not found: ID does not exist" Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.587849 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.592319 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.688873 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.115145 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.115193 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.123721 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"] Feb 18 00:14:36 crc kubenswrapper[5121]: W0218 00:14:36.134768 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac52ad9_59fe_4424_9cc6_bfe2d4cd1144.slice/crio-065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956 WatchSource:0}: Error finding container 065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956: Status 404 returned error can't find the container with id 065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956 Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 
00:14:36.176246 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.315731 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.315801 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.375132 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.552844 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" event={"ID":"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144","Type":"ContainerStarted","Data":"6b11c5c10374c655462753d61a0a2c359f0c9ebc0780b630186446a76633286b"} Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.553694 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.553800 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" event={"ID":"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144","Type":"ContainerStarted","Data":"065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956"} Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.574267 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" podStartSLOduration=2.5742408770000003 podStartE2EDuration="2.574240877s" podCreationTimestamp="2026-02-18 00:14:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:36.569228005 +0000 UTC m=+360.083685740" watchObservedRunningTime="2026-02-18 00:14:36.574240877 +0000 UTC m=+360.088698622" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.607793 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.614895 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.999346 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" Feb 18 00:14:37 crc kubenswrapper[5121]: I0218 00:14:37.280178 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" path="/var/lib/kubelet/pods/1792aaaf-7683-495e-9fab-d35daee8eac0/volumes" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.513713 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.514419 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.560185 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.615392 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.756405 5121 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.756537 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.809924 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:39 crc kubenswrapper[5121]: I0218 00:14:39.643714 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:51 crc kubenswrapper[5121]: I0218 00:14:51.506980 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:51 crc kubenswrapper[5121]: I0218 00:14:51.573100 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.159886 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"] Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.175596 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"] Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.175854 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.181493 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.181838 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.334618 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.335060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.335261 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.437383 5121 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.438020 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.438415 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.440187 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.450879 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc 
kubenswrapper[5121]: I0218 00:15:00.467995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.552831 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.051766 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"] Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732249 5121 generic.go:358] "Generic (PLEG): container finished" podID="a4615074-d315-44d4-99e1-61ad71c1e230" containerID="326a0ee967a9b64ba24c4fe4634a35ca941d8d900a768c26a5a4e78f169bba26" exitCode=0 Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732380 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerDied","Data":"326a0ee967a9b64ba24c4fe4634a35ca941d8d900a768c26a5a4e78f169bba26"} Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732845 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerStarted","Data":"860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43"} Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.145132 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287159 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287518 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287617 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.288326 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.295275 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l" (OuterVolumeSpecName: "kube-api-access-28c8l") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). 
InnerVolumeSpecName "kube-api-access-28c8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.299855 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389230 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389288 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389307 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.751349 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerDied","Data":"860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43"} Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.752102 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43" Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.751415 5121 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.617992 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" containerID="cri-o://3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" gracePeriod=30 Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.870697 5121 generic.go:358] "Generic (PLEG): container finished" podID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerID="3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" exitCode=0 Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.870776 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerDied","Data":"3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521"} Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.100317 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.221972 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222033 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222236 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222493 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222517 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.223278 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.223371 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.224500 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.224944 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.231459 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.231496 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh" (OuterVolumeSpecName: "kube-api-access-phphh") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "kube-api-access-phphh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.232029 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.233876 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.241983 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.254208 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326154 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326218 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326241 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326259 5121 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326283 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326300 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326318 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881502 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881499 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerDied","Data":"51cf34af5f3e60547305a8dcaaf837202c7932c821c7bc1d4c4374385f24b01a"} Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881700 5121 scope.go:117] "RemoveContainer" containerID="3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.905369 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.914499 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:15:19 crc kubenswrapper[5121]: I0218 00:15:19.278788 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" path="/var/lib/kubelet/pods/7147ca0c-09b0-4078-8e66-4d589f54c85a/volumes" Feb 18 00:15:34 crc kubenswrapper[5121]: I0218 00:15:34.545427 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:15:34 crc kubenswrapper[5121]: I0218 00:15:34.546372 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:00 crc kubenswrapper[5121]: 
I0218 00:16:00.147132 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149176 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149228 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149278 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149296 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149522 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149561 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.200794 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.200883 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.211523 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.211944 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.212152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.225706 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.327566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.372894 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.527139 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.974633 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:01 crc kubenswrapper[5121]: I0218 00:16:01.198028 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerStarted","Data":"fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45"} Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.231757 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerStarted","Data":"07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7"} Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.258042 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" podStartSLOduration=1.433283509 podStartE2EDuration="4.258015634s" podCreationTimestamp="2026-02-18 00:16:00 +0000 UTC" firstStartedPulling="2026-02-18 00:16:00.991947341 +0000 UTC m=+444.506405106" lastFinishedPulling="2026-02-18 00:16:03.816679496 +0000 UTC m=+447.331137231" observedRunningTime="2026-02-18 00:16:04.249326835 +0000 UTC m=+447.763784610" watchObservedRunningTime="2026-02-18 00:16:04.258015634 +0000 UTC m=+447.772473399" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.545054 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.545231 5121 
prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.684206 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-gkz29" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.712983 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-gkz29" Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.248974 5121 generic.go:358] "Generic (PLEG): container finished" podID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerID="07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7" exitCode=0 Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.249194 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerDied","Data":"07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7"} Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.715321 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-20 00:11:04 +0000 UTC" deadline="2026-03-14 10:26:08.483489612 +0000 UTC" Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.715474 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="586h10m2.76802121s" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.632031 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.716338 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-20 00:11:04 +0000 UTC" deadline="2026-03-14 18:16:31.392275886 +0000 UTC" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.716383 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="594h0m24.675896667s" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.721089 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.732357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg" (OuterVolumeSpecName: "kube-api-access-6kfwg") pod "17bd0236-52ea-4369-9891-8cf9e1dcff2b" (UID: "17bd0236-52ea-4369-9891-8cf9e1dcff2b"). InnerVolumeSpecName "kube-api-access-6kfwg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.822736 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") on node \"crc\" DevicePath \"\"" Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265543 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerDied","Data":"fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45"} Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265615 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45" Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265815 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.544602 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546001 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546075 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546973 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.547062 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" gracePeriod=600 Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.475895 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" exitCode=0 Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476026 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476780 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476823 5121 scope.go:117] "RemoveContainer" 
containerID="f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.148149 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.149933 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.149964 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.150176 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.207447 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.207578 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210242 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210589 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.267800 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.368710 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.397519 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.529536 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:01 crc kubenswrapper[5121]: I0218 00:18:01.037371 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:01 crc kubenswrapper[5121]: I0218 00:18:01.080508 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerStarted","Data":"0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5"} Feb 18 00:18:03 crc kubenswrapper[5121]: I0218 00:18:03.096260 5121 generic.go:358] "Generic (PLEG): container finished" podID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerID="6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea" exitCode=0 Feb 18 00:18:03 crc kubenswrapper[5121]: I0218 00:18:03.096357 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerDied","Data":"6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea"} Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.391307 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.535354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"0752b905-c20c-4af0-a716-b5297e9ed6fc\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.542916 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl" (OuterVolumeSpecName: "kube-api-access-trmsl") pod "0752b905-c20c-4af0-a716-b5297e9ed6fc" (UID: "0752b905-c20c-4af0-a716-b5297e9ed6fc"). InnerVolumeSpecName "kube-api-access-trmsl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.637131 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110325 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerDied","Data":"0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5"} Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110623 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5" Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110376 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:34 crc kubenswrapper[5121]: I0218 00:18:34.545516 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:18:34 crc kubenswrapper[5121]: I0218 00:18:34.546224 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:18:37 crc kubenswrapper[5121]: I0218 00:18:37.543989 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:18:37 crc kubenswrapper[5121]: I0218 00:18:37.544957 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.549756 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.550626 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" containerID="cri-o://74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.550742 5121 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" containerID="cri-o://07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.709900 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710583 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller" containerID="cri-o://28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710676 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node" containerID="cri-o://7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710725 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb" containerID="cri-o://96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710739 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging" containerID="cri-o://74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" gracePeriod=30 Feb 18 00:18:56 crc 
kubenswrapper[5121]: I0218 00:18:56.710789 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710777 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb" containerID="cri-o://a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710729 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd" containerID="cri-o://d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.751015 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller" containerID="cri-o://79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.806992 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.847071 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848716 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848747 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848763 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848772 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848796 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848803 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848930 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848944 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848957 5121 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.855261 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.929847 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930000 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930150 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932022 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config" 
(OuterVolumeSpecName: "ovnkube-config") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932054 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932138 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932266 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" 
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932377 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932395 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.938396 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r" (OuterVolumeSpecName: "kube-api-access-rmw8r") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "kube-api-access-rmw8r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.938874 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033697 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033864 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033942 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033995 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034153 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034189 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034823 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034931 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.039066 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.052841 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.080129 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-acl-logging/0.log"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.080596 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-controller/0.log"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.081238 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134767 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134851 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134886 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134936 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134946 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134963 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135098 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135127 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135151 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135176 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135211 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135286 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135312 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135369 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135403 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135442 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135473 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135528 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135562 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135591 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135630 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") "
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135980 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket" (OuterVolumeSpecName: "log-socket") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136075 5121 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136096 5121 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136128 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136156 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136142 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136220 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136502 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136593 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash" (OuterVolumeSpecName: "host-slash") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136614 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136639 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136674 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136692 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log" (OuterVolumeSpecName: "node-log") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136735 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136776 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.137004 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.138435 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.139558 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l" (OuterVolumeSpecName: "kube-api-access-xfl5l") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "kube-api-access-xfl5l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.144358 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvj44"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145282 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145318 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145341 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145352 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145389 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kubecfg-setup"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145401 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kubecfg-setup"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145416 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145428 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145441 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145453 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145469 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145480 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145497 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145508 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145524 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145535 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145558 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145570 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145738 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145763 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145776 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145789 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145807 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145825 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145846 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145861 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.147127 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.164092 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.187927 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"
Feb 18 00:18:57 crc kubenswrapper[5121]: W0218 00:18:57.216799 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod703db6d8_e584_4bdc_ad21_8a159643b2cf.slice/crio-e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9 WatchSource:0}: Error finding container e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9: Status 404 returned error can't find the container with id e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238091 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238250 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238292 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238486 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238559 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238632 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238714 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239229 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239301 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239353 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239395 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239438 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239509 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239582 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239625 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239758 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239803 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240027 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240046 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240070 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218
00:18:57.240079 5121 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240089 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240096 5121 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240107 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240116 5121 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240125 5121 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240149 5121 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240159 5121 reconciler_common.go:299] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240167 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240175 5121 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240183 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240193 5121 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240203 5121 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240211 5121 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240220 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341324 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341378 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341401 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341418 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341456 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341481 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341509 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341552 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 
00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341579 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341601 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341597 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341660 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341626 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341604 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341741 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341926 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342022 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342006 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342114 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342122 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341582 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342182 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342064 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342353 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342407 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: 
\"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342482 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342523 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342455 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342581 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 
crc kubenswrapper[5121]: I0218 00:18:57.342976 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.343410 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.344728 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.366460 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.491232 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521822 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521903 5121 generic.go:358] "Generic (PLEG): container finished" podID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" containerID="5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299" exitCode=2 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521969 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerDied","Data":"5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.524187 5121 scope.go:117] "RemoveContainer" containerID="5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.532815 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-acl-logging/0.log" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.534632 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-controller/0.log" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536636 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" exitCode=0 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536688 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" exitCode=0 
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536704 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" exitCode=0 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536715 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" exitCode=0 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536725 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" exitCode=0 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536735 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" exitCode=0 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536746 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" exitCode=143 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536735 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536853 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536861 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536885 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536917 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536948 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536974 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536995 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537010 5121 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537031 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537053 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537070 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537087 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537101 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537116 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537130 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537146 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537159 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537176 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537225 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537242 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537256 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537270 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537283 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537297 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537311 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537325 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537340 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537371 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536757 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" exitCode=143
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537639 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537719 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537739 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537755 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537772 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537787 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537802 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537818 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537833 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537847 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546168 5121 generic.go:358] "Generic (PLEG): container finished" podID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerID="07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546205 5121 generic.go:358] "Generic (PLEG): container finished" podID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerID="74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546311 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546693 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546789 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546873 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546940 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546965 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546977 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547047 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547059 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.563459 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.580864 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.581423 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.585400 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.610813 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.610860 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.611004 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.638239 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.660347 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.678822 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.814295 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.836500 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.870600 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.899876 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.900439 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900467 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900488 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.900958 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900982 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901000 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901252 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901280 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901291 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901530 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901576 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901593 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901963 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901981 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901993 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.902461 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.902483 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.902497 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.902994 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903011 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903023 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.903615 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903636 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903669 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.904516 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.904582 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.904623 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.905996 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906034 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906694 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906730 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907081 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907107 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907571 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907601 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908035 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908069 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908472 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908500 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.921774 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.921831 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922396 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922484 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922878 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922921 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923171 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923193 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923429 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923448 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923639 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923716 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923918 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923935 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924133 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924150 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924377 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924402 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.925747 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.925786 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.926467 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.926523 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927009 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927036 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927327 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927364 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927698 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not
exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927726 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927963 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927989 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928216 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928255 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928444 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status 
\"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928465 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928734 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928760 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928981 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928999 5121 scope.go:117] "RemoveContainer" 
containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929200 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929216 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929470 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929491 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930026 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could 
not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930075 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930358 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930386 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930754 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930822 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 
00:18:57.931073 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931094 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931315 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931354 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931568 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 
7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575137 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e11ae91-1d70-4646-8a77-13e95651cf36" containerID="f924400afe73fc2ebb7c7d384ff314b8d1c82b7210d0334263517b991dc5d61b" exitCode=0 Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575421 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerDied","Data":"f924400afe73fc2ebb7c7d384ff314b8d1c82b7210d0334263517b991dc5d61b"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575459 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"f27f501182976119f1afa288a92ebcc5f23a554452521ac4737ee868c17ac686"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.584566 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"1fb339aa8ef13951f91b65e9b4bd719830b36c604b32f5452a201fe333fb18d8"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.584723 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"6f5fc4bafc017b43c626b2bed1107d2648ceef1f468ab846a2503e4c89c23a77"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.591214 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.591419 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"325ca769f8b12afd18cac46fed98d6343a14a622a72b47e474f86387625e75d2"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.667302 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" podStartSLOduration=2.6672693929999998 podStartE2EDuration="2.667269393s" podCreationTimestamp="2026-02-18 00:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:18:58.654360694 +0000 UTC m=+622.168818519" watchObservedRunningTime="2026-02-18 00:18:58.667269393 +0000 UTC m=+622.181727178" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.279487 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" path="/var/lib/kubelet/pods/0ec6f87b-86e0-4893-9709-9dc7381bc95a/volumes" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.280640 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" path="/var/lib/kubelet/pods/aa9cd074-60f6-4754-9ef8-567f9274e384/volumes" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.603836 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"7c19e98acf0cb7662d53e9c18b19bb018020832f148536f78f00ab76b113b2ce"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604166 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"6d01a9a7e5cc0ab6888ee37f85cd72b3a540781bb3e80d442dd34d6790703929"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604184 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"31227a932831b4dffcb475ad97571a9a55c275a87a2e61268c08fcb0125a89bb"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"d38d6e7946f39b34b8d755d85b6d3020fdfa433b371f8478185c6bc94d40b354"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604210 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"05cef983d9f92e2521302b5c44899045032387b00bd6df89cb5d8d49897a2dfb"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604221 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"82f5dae6c4143c9534bc285bfd8c2c7da0dcd1723a78273943179cf67afec0bf"} Feb 18 00:19:02 crc kubenswrapper[5121]: I0218 00:19:02.631283 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"269a4106dc7d73614339bd6b8c2e9277c5c88b77aa292962acec0cefdd9c8bc3"} Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.545221 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.545594 5121 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.655197 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"6327744c97df40c767a9dc8867f99340483329f333dcd2ef9579fdc6bc67d69a"} Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656370 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656416 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656430 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.701562 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" podStartSLOduration=7.7015418 podStartE2EDuration="7.7015418s" podCreationTimestamp="2026-02-18 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:19:04.698734424 +0000 UTC m=+628.213192179" watchObservedRunningTime="2026-02-18 00:19:04.7015418 +0000 UTC m=+628.215999535" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.705162 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 
00:19:04.722054 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545287 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545810 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545860 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.546530 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.546604 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" gracePeriod=600 Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.742423 5121 
provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.884821 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" exitCode=0 Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.884936 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.885373 5121 scope.go:117] "RemoveContainer" containerID="71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" Feb 18 00:19:35 crc kubenswrapper[5121]: I0218 00:19:35.894706 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} Feb 18 00:19:36 crc kubenswrapper[5121]: I0218 00:19:36.693407 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:37 crc kubenswrapper[5121]: I0218 00:19:37.696611 5121 scope.go:117] "RemoveContainer" containerID="07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" Feb 18 00:19:37 crc kubenswrapper[5121]: I0218 00:19:37.730059 5121 scope.go:117] "RemoveContainer" containerID="74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.149679 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.161155 5121 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.161383 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164319 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164637 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164721 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.287913 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.389199 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.425186 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod 
\"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.503030 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.767524 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: W0218 00:20:00.778943 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d8c4383_cf7d_4c99_badf_42f433b91870.slice/crio-14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db WatchSource:0}: Error finding container 14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db: Status 404 returned error can't find the container with id 14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db Feb 18 00:20:01 crc kubenswrapper[5121]: I0218 00:20:01.075338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerStarted","Data":"14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db"} Feb 18 00:20:03 crc kubenswrapper[5121]: I0218 00:20:03.094561 5121 generic.go:358] "Generic (PLEG): container finished" podID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerID="2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618" exitCode=0 Feb 18 00:20:03 crc kubenswrapper[5121]: I0218 00:20:03.094641 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerDied","Data":"2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618"} Feb 18 00:20:04 crc kubenswrapper[5121]: 
I0218 00:20:04.439640 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k"
Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.549140 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"6d8c4383-cf7d-4c99-badf-42f433b91870\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") "
Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.555258 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb" (OuterVolumeSpecName: "kube-api-access-2v6hb") pod "6d8c4383-cf7d-4c99-badf-42f433b91870" (UID: "6d8c4383-cf7d-4c99-badf-42f433b91870"). InnerVolumeSpecName "kube-api-access-2v6hb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.651847 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111560 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k"
Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111574 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerDied","Data":"14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db"}
Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111992 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db"
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.073312 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"]
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.074039 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9knfx" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server" containerID="cri-o://7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" gracePeriod=30
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.504287 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.579923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") "
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.580681 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") "
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.580971 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") "
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.582010 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities" (OuterVolumeSpecName: "utilities") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.586463 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg" (OuterVolumeSpecName: "kube-api-access-vzcbg") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "kube-api-access-vzcbg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.600480 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682683 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682742 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682761 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.126882 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" exitCode=0
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.126958 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127314 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"}
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127373 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1"}
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127412 5121 scope.go:117] "RemoveContainer" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.149130 5121 scope.go:117] "RemoveContainer" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.167452 5121 scope.go:117] "RemoveContainer" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.179445 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"]
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.183692 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"]
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.202873 5121 scope.go:117] "RemoveContainer" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"
Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.203326 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": container with ID starting with 7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825 not found: ID does not exist" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.203402 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"} err="failed to get container status \"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": rpc error: code = NotFound desc = could not find container \"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": container with ID starting with 7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825 not found: ID does not exist"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.203481 5121 scope.go:117] "RemoveContainer" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"
Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.204091 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": container with ID starting with 186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1 not found: ID does not exist" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204172 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"} err="failed to get container status \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": rpc error: code = NotFound desc = could not find container \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": container with ID starting with 186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1 not found: ID does not exist"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204235 5121 scope.go:117] "RemoveContainer" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"
Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.204483 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": container with ID starting with 69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18 not found: ID does not exist" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204556 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"} err="failed to get container status \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": rpc error: code = NotFound desc = could not find container \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": container with ID starting with 69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18 not found: ID does not exist"
Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.279237 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" path="/var/lib/kubelet/pods/c9e0e10c-e462-4d05-9e54-25f1527555c1/volumes"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.751379 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"]
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753167 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753203 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753236 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-content"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753251 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-content"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753307 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753323 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753366 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-utilities"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753379 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-utilities"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753539 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753563 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.787494 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"]
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.787773 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.793293 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853176 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853236 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853279 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954206 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954301 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954354 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954900 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.972193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:10 crc kubenswrapper[5121]: I0218 00:20:10.105688 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:10 crc kubenswrapper[5121]: I0218 00:20:10.337043 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"]
Feb 18 00:20:10 crc kubenswrapper[5121]: W0218 00:20:10.350236 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda138e59c_43ff_4154_897a_b070bedb8045.slice/crio-2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53 WatchSource:0}: Error finding container 2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53: Status 404 returned error can't find the container with id 2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159100 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="9a5eee9995db7f0af9d80b87e08b08557dda91ed5b0fe76101170f0cfde01214" exitCode=0
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159240 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"9a5eee9995db7f0af9d80b87e08b08557dda91ed5b0fe76101170f0cfde01214"}
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159420 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerStarted","Data":"2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"}
Feb 18 00:20:12 crc kubenswrapper[5121]: I0218 00:20:12.170559 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerStarted","Data":"2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053"}
Feb 18 00:20:13 crc kubenswrapper[5121]: I0218 00:20:13.179141 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053" exitCode=0
Feb 18 00:20:13 crc kubenswrapper[5121]: I0218 00:20:13.179253 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053"}
Feb 18 00:20:14 crc kubenswrapper[5121]: I0218 00:20:14.191483 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="527d085d9cc5428e5e01712b0b101d458a0f2eb267c0f9c71540bd370158259b" exitCode=0
Feb 18 00:20:14 crc kubenswrapper[5121]: I0218 00:20:14.191687 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"527d085d9cc5428e5e01712b0b101d458a0f2eb267c0f9c71540bd370158259b"}
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.507513 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.631880 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.631945 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.632198 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.634903 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle" (OuterVolumeSpecName: "bundle") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.639804 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p" (OuterVolumeSpecName: "kube-api-access-6l27p") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "kube-api-access-6l27p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.734642 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.734745 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.899704 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util" (OuterVolumeSpecName: "util") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.937122 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209462 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"}
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209555 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.754137 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756168 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="pull"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756347 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="pull"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756459 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="util"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756553 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="util"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756683 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756783 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.757024 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.769731 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.770068 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.773339 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849350 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849529 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849753 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951315 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951540 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.952318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.952493 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.982617 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:17 crc kubenswrapper[5121]: I0218 00:20:17.101221 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:17 crc kubenswrapper[5121]: I0218 00:20:17.382247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:17 crc kubenswrapper[5121]: W0218 00:20:17.389554 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod763c3704_8ae0_4b52_9eb0_2dbef76acc66.slice/crio-45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca WatchSource:0}: Error finding container 45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca: Status 404 returned error can't find the container with id 45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232020 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="796638aaa83111f70c8b12404164778f38cb4b6acfc8bc74058fe2fb5032bfad" exitCode=0
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232246 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"796638aaa83111f70c8b12404164778f38cb4b6acfc8bc74058fe2fb5032bfad"}
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232719 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerStarted","Data":"45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca"}
Feb 18 00:20:19 crc kubenswrapper[5121]: I0218 00:20:19.241416 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="740503f0d1b4f51c669bb09f1be03c46ca2ca22ef2454b9181873fd5d8664fa1" exitCode=0
Feb 18 00:20:19 crc kubenswrapper[5121]: I0218 00:20:19.241570 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"740503f0d1b4f51c669bb09f1be03c46ca2ca22ef2454b9181873fd5d8664fa1"}
Feb 18 00:20:20 crc kubenswrapper[5121]: I0218 00:20:20.249797 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="5728d0c2a69b93e313083b883cdd7419fcdd7c48fd330cdecd2170eba1e85741" exitCode=0
Feb 18 00:20:20 crc kubenswrapper[5121]: I0218 00:20:20.249857 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"5728d0c2a69b93e313083b883cdd7419fcdd7c48fd330cdecd2170eba1e85741"}
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.184896 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"]
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.197390 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"]
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.197549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308830 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308922 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410766 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") "
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410887 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410924 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.411626 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.411947 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.468567 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.521883 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.590133 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.713966 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714017 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714054 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714963 5121 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle" (OuterVolumeSpecName: "bundle") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.724979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk" (OuterVolumeSpecName: "kube-api-access-plwkk") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "kube-api-access-plwkk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.738013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util" (OuterVolumeSpecName: "util") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.808124 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"] Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815061 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815094 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815103 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.260814 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="8d2d78e70261a82b7fdf8e73d42b1e863dbcc4037a6ab2c099caee73e2c7adad" exitCode=0 Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.260922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"8d2d78e70261a82b7fdf8e73d42b1e863dbcc4037a6ab2c099caee73e2c7adad"} Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.261217 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" 
event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerStarted","Data":"0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab"} Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265130 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca"} Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265168 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca" Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265237 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.847986 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"] Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850152 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850181 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850199 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="pull" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850205 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="pull" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850220 5121 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="util" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850226 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="util" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850524 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.859922 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.864247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"] Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.864869 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-hnzh6\"" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.865261 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.865836 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.974005 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" Feb 18 00:20:26 crc kubenswrapper[5121]: 
I0218 00:20:26.993632 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"] Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.998929 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.007052 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.007166 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-c7mzw\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.016585 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.020385 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.024255 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.036079 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.074985 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.144109 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177804 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: 
\"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177829 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177897 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.182201 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.242514 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282577 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282684 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282711 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.304005 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.308241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.308744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.316284 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.319988 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.335522 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.349143 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.350674 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-4fsfp\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.355567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.362458 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.378404 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerStarted","Data":"8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1"} Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.468115 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.486920 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.486985 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.589911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" 
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.590336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.597590 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.613331 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.681035 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: W0218 00:20:27.885110 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac0aed84_6c11_41de_9f31_3a7b2a313944.slice/crio-e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e WatchSource:0}: Error finding container e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e: Status 404 returned error can't find the container with id e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e Feb 18 00:20:27 crc kubenswrapper[5121]: W0218 00:20:27.990395 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2277040f_ef0e_4742_a923_fff6ccf3e5aa.slice/crio-8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db WatchSource:0}: Error finding container 8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db: Status 404 returned error can't find the container with id 8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110125 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110209 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110243 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110257 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110278 5121 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110346 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.112710 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-x8vcf\"" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.306075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.306583 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.388534 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1" exitCode=0 Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.388609 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" 
event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.389982 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" event={"ID":"34785a14-a8e1-49c9-bcca-3996487db06f","Type":"ContainerStarted","Data":"d704ff6ba35df71fea48447f1e3530ec049fd77f8e7656ef84d2d9eef4a6ceda"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.391275 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" event={"ID":"5551a95c-fb98-465f-ba4f-3eacc393a47b","Type":"ContainerStarted","Data":"a276e1a02df3c3d24d44f86a2434fcb73240ef71abcfba38fa640c0e83f1d234"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.393019 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" event={"ID":"ac0aed84-6c11-41de-9f31-3a7b2a313944","Type":"ContainerStarted","Data":"e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.394154 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" event={"ID":"2277040f-ef0e-4742-a923-fff6ccf3e5aa","Type":"ContainerStarted","Data":"8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.408205 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 
00:20:28.408461 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.409431 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.431518 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.433060 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.661675 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:28 crc kubenswrapper[5121]: W0218 00:20:28.665276 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode476d06d_6937_425a_b4b9_ef90c4e141f5.slice/crio-46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012 WatchSource:0}: Error finding container 46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012: Status 404 returned error can't find the container with id 46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012 Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.408132 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="f27f1f3867efe4d258e9b2bc693777b0bbc85f57be6a03a3b428c474f9f8df82" exitCode=0 Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.408207 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"f27f1f3867efe4d258e9b2bc693777b0bbc85f57be6a03a3b428c474f9f8df82"} Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.410016 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" event={"ID":"e476d06d-6937-425a-b4b9-ef90c4e141f5","Type":"ContainerStarted","Data":"46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012"} Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.841977 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.848977 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851466 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851872 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851922 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.852392 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-cdrdz\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.876095 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937534 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937613 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937641 
5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038430 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038540 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038576 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.044413 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 
crc kubenswrapper[5121]: I0218 00:20:30.044791 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.084997 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.168967 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.788691 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.818156 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851503 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851602 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851720 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.853740 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle" (OuterVolumeSpecName: "bundle") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.868196 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z" (OuterVolumeSpecName: "kube-api-access-xgv2z") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "kube-api-access-xgv2z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.876540 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util" (OuterVolumeSpecName: "util") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: W0218 00:20:30.882868 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0abf839_f912_4864_83f7_db2da1ec1276.slice/crio-23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500 WatchSource:0}: Error finding container 23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500: Status 404 returned error can't find the container with id 23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500 Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953748 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953784 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") on node \"crc\" DevicePath 
\"\"" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953795 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.444344 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" event={"ID":"d0abf839-f912-4864-83f7-db2da1ec1276","Type":"ContainerStarted","Data":"23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500"} Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447532 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447538 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab"} Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447576 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.499609 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"] Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500862 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500878 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract" Feb 18 
00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500892 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="util" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500899 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="util" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500915 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="pull" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500922 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="pull" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.501044 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.507445 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509437 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509479 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-vr9dg\"" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509489 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.515876 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"] Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.552006 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" event={"ID":"d0abf839-f912-4864-83f7-db2da1ec1276","Type":"ContainerStarted","Data":"8db66948574d1cc857ae71290892f2b0782b6e317357e587729370fea9d500e6"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.554603 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" event={"ID":"2277040f-ef0e-4742-a923-fff6ccf3e5aa","Type":"ContainerStarted","Data":"1b2d36610474e6a39f98ff5ad509058ce41d4c7dfbb0bce3ed9bbec348d89c97"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.555428 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.558064 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" event={"ID":"34785a14-a8e1-49c9-bcca-3996487db06f","Type":"ContainerStarted","Data":"9ce0353ff35db1ba4632d0400602d2a7f8d71f1d011029b1fbe2dc48d06441d1"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.562059 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.562322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" event={"ID":"5551a95c-fb98-465f-ba4f-3eacc393a47b","Type":"ContainerStarted","Data":"96b3e633c6c5c13cdaf3aeb263cf087e66281caab0da3161f74bc21ae756d8ef"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.564140 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" event={"ID":"e476d06d-6937-425a-b4b9-ef90c4e141f5","Type":"ContainerStarted","Data":"b0a2af128183350148df491d02a83b42fb7027201efea38e535a00e58acbaecd"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.564278 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.565247 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" event={"ID":"ac0aed84-6c11-41de-9f31-3a7b2a313944","Type":"ContainerStarted","Data":"dca373cb3c86dc53fa750829d288106a1eb0a086ee8fcdc81718f50c0d546240"} Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.577957 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" podStartSLOduration=3.009646082 podStartE2EDuration="14.577937204s" podCreationTimestamp="2026-02-18 00:20:29 +0000 UTC" 
firstStartedPulling="2026-02-18 00:20:30.906437756 +0000 UTC m=+714.420895481" lastFinishedPulling="2026-02-18 00:20:42.474728848 +0000 UTC m=+725.989186603" observedRunningTime="2026-02-18 00:20:43.571702444 +0000 UTC m=+727.086160239" watchObservedRunningTime="2026-02-18 00:20:43.577937204 +0000 UTC m=+727.092394939" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.601399 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" podStartSLOduration=2.791618133 podStartE2EDuration="16.60137737s" podCreationTimestamp="2026-02-18 00:20:27 +0000 UTC" firstStartedPulling="2026-02-18 00:20:28.669286239 +0000 UTC m=+712.183743974" lastFinishedPulling="2026-02-18 00:20:42.479045476 +0000 UTC m=+725.993503211" observedRunningTime="2026-02-18 00:20:43.591176283 +0000 UTC m=+727.105634018" watchObservedRunningTime="2026-02-18 00:20:43.60137737 +0000 UTC m=+727.115835135" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.603983 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.604523 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.641065 5121 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" podStartSLOduration=3.11165487 podStartE2EDuration="17.641044428s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.943554711 +0000 UTC m=+711.458012436" lastFinishedPulling="2026-02-18 00:20:42.472944249 +0000 UTC m=+725.987401994" observedRunningTime="2026-02-18 00:20:43.638323534 +0000 UTC m=+727.152781279" watchObservedRunningTime="2026-02-18 00:20:43.641044428 +0000 UTC m=+727.155502163" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.668099 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" podStartSLOduration=3.223241769 podStartE2EDuration="17.668077642s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:28.058161783 +0000 UTC m=+711.572619518" lastFinishedPulling="2026-02-18 00:20:42.502997656 +0000 UTC m=+726.017455391" observedRunningTime="2026-02-18 00:20:43.662330876 +0000 UTC m=+727.176788641" watchObservedRunningTime="2026-02-18 00:20:43.668077642 +0000 UTC m=+727.182535407" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.705819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.705911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod 
\"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.706318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.728834 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" podStartSLOduration=2.196204638 podStartE2EDuration="16.728805711s" podCreationTimestamp="2026-02-18 00:20:27 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.992795221 +0000 UTC m=+711.507252956" lastFinishedPulling="2026-02-18 00:20:42.525396294 +0000 UTC m=+726.039854029" observedRunningTime="2026-02-18 00:20:43.706763863 +0000 UTC m=+727.221221598" watchObservedRunningTime="2026-02-18 00:20:43.728805711 +0000 UTC m=+727.243263456" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.731937 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" podStartSLOduration=3.131782322 podStartE2EDuration="17.731923927s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.891567225 +0000 UTC m=+711.406024960" lastFinishedPulling="2026-02-18 00:20:42.49170883 +0000 UTC m=+726.006166565" observedRunningTime="2026-02-18 00:20:43.728354709 +0000 UTC m=+727.242812454" watchObservedRunningTime="2026-02-18 00:20:43.731923927 +0000 UTC m=+727.246381662" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.756580 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.823408 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" Feb 18 00:20:44 crc kubenswrapper[5121]: I0218 00:20:44.046978 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"] Feb 18 00:20:44 crc kubenswrapper[5121]: I0218 00:20:44.571233 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" event={"ID":"f30b7d1c-327c-49cc-9f8e-4baf945b1e11","Type":"ContainerStarted","Data":"302ca29521be5d53821f7083bc37ae1913ae9f6ff3a07ae21b11b6cb63b8a256"} Feb 18 00:20:48 crc kubenswrapper[5121]: I0218 00:20:48.602695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" event={"ID":"f30b7d1c-327c-49cc-9f8e-4baf945b1e11","Type":"ContainerStarted","Data":"e2c621c76960c6646d92782b56d4b7b817c5acfbc21ccef71c3d3727f71862ab"} Feb 18 00:20:48 crc kubenswrapper[5121]: I0218 00:20:48.630511 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" podStartSLOduration=2.007822073 podStartE2EDuration="5.630482912s" podCreationTimestamp="2026-02-18 00:20:43 +0000 UTC" firstStartedPulling="2026-02-18 00:20:44.047782846 +0000 UTC m=+727.562240581" lastFinishedPulling="2026-02-18 
00:20:47.670443685 +0000 UTC m=+731.184901420" observedRunningTime="2026-02-18 00:20:48.624686625 +0000 UTC m=+732.139144380" watchObservedRunningTime="2026-02-18 00:20:48.630482912 +0000 UTC m=+732.144940657" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.064944 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.079350 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.080066 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.093364 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.098680 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099245 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099303 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099530 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-6w67p\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099828 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100039 5121 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100194 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100355 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209768 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209828 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209865 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209895 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209924 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210160 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210232 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210251 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210391 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210429 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc 
kubenswrapper[5121]: I0218 00:20:51.210457 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210490 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210510 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.312803 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313006 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod 
\"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313357 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313456 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313459 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313609 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313683 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313866 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313907 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313978 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314182 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314236 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314366 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314392 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314437 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314918 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315088 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315249 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315546 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315774 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.320250 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.323964 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 
crc kubenswrapper[5121]: I0218 00:20:51.328378 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.328840 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330262 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330414 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.405608 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.947011 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.336920 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"] Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.346487 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349214 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349561 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-gh7sh\"" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349613 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.352291 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"] Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.435866 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " 
pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.435918 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.447531 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"] Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.451578 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.454849 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-kzdgf\"" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.461758 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"] Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537619 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: 
\"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537798 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537877 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.556245 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.557928 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.634007 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"d0bbdb692e272e5a32a127d8b6e7142c8f204f4cf6ce78888add4ccb0480e496"} Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.639114 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.639214 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.655399 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.656149 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.662838 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.767058 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.859814 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"] Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.151525 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"] Feb 18 00:20:53 crc kubenswrapper[5121]: W0218 00:20:53.165794 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod244cd2fe_9d19_45ba_9d3c_2fa6d153f27c.slice/crio-47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400 WatchSource:0}: Error finding container 47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400: Status 404 returned error can't find the container with id 47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400 Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.645519 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" event={"ID":"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c","Type":"ContainerStarted","Data":"47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400"} Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.646958 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" event={"ID":"b1211244-3ab3-496b-9610-d2c6d4943528","Type":"ContainerStarted","Data":"4600ae546a819a152cb51fb1aa0c33b6b59050a7d39bf6bd24a726ba24b7559f"} Feb 18 00:20:54 crc kubenswrapper[5121]: I0218 00:20:54.574580 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.728295 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.730860 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" event={"ID":"b1211244-3ab3-496b-9610-d2c6d4943528","Type":"ContainerStarted","Data":"de6b554b3496d978ccc756a2b96804c70e6df38af477a1ae7dc2ecfe763ffa09"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.731377 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.733928 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" event={"ID":"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c","Type":"ContainerStarted","Data":"8b6507ff3afc45212015775722a1804b5760ba9a343bc8c6051e1b09249e89d9"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.813703 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" podStartSLOduration=2.281660903 podStartE2EDuration="14.813644809s" podCreationTimestamp="2026-02-18 00:20:52 +0000 UTC" firstStartedPulling="2026-02-18 00:20:53.167546 +0000 UTC m=+736.682003735" lastFinishedPulling="2026-02-18 00:21:05.699529896 +0000 UTC m=+749.213987641" observedRunningTime="2026-02-18 00:21:06.805356133 +0000 UTC m=+750.319813908" watchObservedRunningTime="2026-02-18 00:21:06.813644809 +0000 UTC m=+750.328102604" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.845639 5121 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" podStartSLOduration=2.020910529 podStartE2EDuration="14.845611127s" podCreationTimestamp="2026-02-18 00:20:52 +0000 UTC" firstStartedPulling="2026-02-18 00:20:52.87487071 +0000 UTC m=+736.389328445" lastFinishedPulling="2026-02-18 00:21:05.699571268 +0000 UTC m=+749.214029043" observedRunningTime="2026-02-18 00:21:06.829995302 +0000 UTC m=+750.344453067" watchObservedRunningTime="2026-02-18 00:21:06.845611127 +0000 UTC m=+750.360068882" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.976319 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:21:07 crc kubenswrapper[5121]: I0218 00:21:07.010397 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:21:08 crc kubenswrapper[5121]: I0218 00:21:08.754602 5121 generic.go:358] "Generic (PLEG): container finished" podID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerID="0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da" exitCode=0 Feb 18 00:21:08 crc kubenswrapper[5121]: I0218 00:21:08.755106 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerDied","Data":"0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da"} Feb 18 00:21:09 crc kubenswrapper[5121]: I0218 00:21:09.767500 5121 generic.go:358] "Generic (PLEG): container finished" podID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerID="902e221ffd38b6e922dc7e07fcd12be3e479632041e186f2ffa8d8c89de796a1" exitCode=0 Feb 18 00:21:09 crc kubenswrapper[5121]: I0218 00:21:09.767588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerDied","Data":"902e221ffd38b6e922dc7e07fcd12be3e479632041e186f2ffa8d8c89de796a1"} Feb 
18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.427892 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.804384 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.804603 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.807791 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-qrn44\"" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.982066 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.982226 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.083527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 
00:21:12.084138 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.122061 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.122549 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.134696 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.425123 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.748830 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.804978 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-mkxwj" event={"ID":"8dac9f2e-68b4-409b-9fd2-bfc0bd928235","Type":"ContainerStarted","Data":"906564509c0494de56e628619f53ea8b3abbe7934d74d754fd5bb9465ce9bd3b"} Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.837221 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"7b15826d2fbec8aace9fd8e362717eceab844473f76cb2855f885ba343c8a0d2"} Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.837550 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.891544 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=10.904233018 podStartE2EDuration="24.891520334s" podCreationTimestamp="2026-02-18 00:20:51 +0000 UTC" firstStartedPulling="2026-02-18 00:20:51.96514308 +0000 UTC m=+735.479600815" lastFinishedPulling="2026-02-18 00:21:05.952430376 +0000 UTC m=+749.466888131" observedRunningTime="2026-02-18 00:21:15.884918516 +0000 UTC m=+759.399376311" watchObservedRunningTime="2026-02-18 00:21:15.891520334 +0000 UTC m=+759.405978109" Feb 18 00:21:17 crc kubenswrapper[5121]: I0218 00:21:17.849423 5121 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-759f64656b-mkxwj" event={"ID":"8dac9f2e-68b4-409b-9fd2-bfc0bd928235","Type":"ContainerStarted","Data":"064094ec321c65659fbb51a4bfd879cd6736257adc085796713be1c1756c17fe"} Feb 18 00:21:18 crc kubenswrapper[5121]: I0218 00:21:18.891005 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-mkxwj" podStartSLOduration=7.890916894 podStartE2EDuration="7.890916894s" podCreationTimestamp="2026-02-18 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:21:18.877312606 +0000 UTC m=+762.391770431" watchObservedRunningTime="2026-02-18 00:21:18.890916894 +0000 UTC m=+762.405374669" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.443265 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.453384 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.453575 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.565845 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.565959 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.566007 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.667929 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668023 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" 
(UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668060 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.696680 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.797272 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.039025 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897726 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerID="37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0" exitCode=0 Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897802 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0"} Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"22fc1022a88ca0ba4f6907e3d2b516afb76b997bbeee96263623281d6f180a6d"} Feb 18 00:21:25 crc kubenswrapper[5121]: I0218 00:21:25.914408 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc"} Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.925791 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerID="14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc" exitCode=0 Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.925843 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" 
event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc"} Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.972723 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerName="elasticsearch" probeResult="failure" output=< Feb 18 00:21:26 crc kubenswrapper[5121]: {"timestamp": "2026-02-18T00:21:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 18 00:21:26 crc kubenswrapper[5121]: > Feb 18 00:21:27 crc kubenswrapper[5121]: I0218 00:21:27.934617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10"} Feb 18 00:21:27 crc kubenswrapper[5121]: I0218 00:21:27.957550 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gv79" podStartSLOduration=4.236541001 podStartE2EDuration="4.957533335s" podCreationTimestamp="2026-02-18 00:21:23 +0000 UTC" firstStartedPulling="2026-02-18 00:21:24.898480084 +0000 UTC m=+768.412937819" lastFinishedPulling="2026-02-18 00:21:25.619472378 +0000 UTC m=+769.133930153" observedRunningTime="2026-02-18 00:21:27.95548727 +0000 UTC m=+771.469945005" watchObservedRunningTime="2026-02-18 00:21:27.957533335 +0000 UTC m=+771.471991080" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.122765 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.760818 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 
00:21:32.839233 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.839329 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.852231 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.904894 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.905012 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.905380 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: 
\"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007029 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007468 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007898 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.008575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.033590 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.157672 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.620163 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.797602 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.797667 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.982668 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerStarted","Data":"0a774bf0d1c1014df993d26326d2d2252f922bf686e6d3251333ff73eb48db32"} Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.544640 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.544819 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.851893 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8gv79" 
podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" probeResult="failure" output=< Feb 18 00:21:34 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Feb 18 00:21:34 crc kubenswrapper[5121]: > Feb 18 00:21:41 crc kubenswrapper[5121]: I0218 00:21:41.067616 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f05b854-8a2a-4d4e-84e4-194616da0cd1" containerID="aa393960edf195860bbde9c83a5696dcca572211bb64767d53b43bf4fbe26e06" exitCode=0 Feb 18 00:21:41 crc kubenswrapper[5121]: I0218 00:21:41.067824 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerDied","Data":"aa393960edf195860bbde9c83a5696dcca572211bb64767d53b43bf4fbe26e06"} Feb 18 00:21:43 crc kubenswrapper[5121]: I0218 00:21:43.847116 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:43 crc kubenswrapper[5121]: I0218 00:21:43.894341 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.089829 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.094296 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerStarted","Data":"04abe4d29395dc760207f73238b22eecf9a12cbe75255620b92ba4f333dde229"} Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.122811 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" 
podStartSLOduration=2.105460764 podStartE2EDuration="12.122785959s" podCreationTimestamp="2026-02-18 00:21:32 +0000 UTC" firstStartedPulling="2026-02-18 00:21:33.640028394 +0000 UTC m=+777.154486169" lastFinishedPulling="2026-02-18 00:21:43.657353629 +0000 UTC m=+787.171811364" observedRunningTime="2026-02-18 00:21:44.113233581 +0000 UTC m=+787.627691386" watchObservedRunningTime="2026-02-18 00:21:44.122785959 +0000 UTC m=+787.637243724" Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.104945 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gv79" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" containerID="cri-o://2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" gracePeriod=2 Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.943704 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.963781 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.963943 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.999189 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:45.999286 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:45.999325 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" 
Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100227 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100274 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100986 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.112505 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" 
containerID="2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" exitCode=0 Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.112596 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10"} Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.124256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.281032 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.308864 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403714 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403766 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403817 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.406957 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities" (OuterVolumeSpecName: "utilities") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.414549 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p" (OuterVolumeSpecName: "kube-api-access-tb64p") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "kube-api-access-tb64p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.505709 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.505756 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.515122 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.607080 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.723052 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:46 crc kubenswrapper[5121]: W0218 00:21:46.735107 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode665d44f_e92f_4675_8b1c_f1f169d2452c.slice/crio-2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440 WatchSource:0}: Error finding container 2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440: Status 404 returned error can't find the container with id 
2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440 Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.135599 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"22fc1022a88ca0ba4f6907e3d2b516afb76b997bbeee96263623281d6f180a6d"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.136176 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.138558 5121 scope.go:117] "RemoveContainer" containerID="2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154592 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="9e7ce08e59311aa1181158e5f1cfa0485216113c20d6904b842bff7e33df6f9c" exitCode=0 Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"9e7ce08e59311aa1181158e5f1cfa0485216113c20d6904b842bff7e33df6f9c"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154790 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerStarted","Data":"2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.198175 5121 scope.go:117] "RemoveContainer" containerID="14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.202374 5121 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.209146 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.224572 5121 scope.go:117] "RemoveContainer" containerID="37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.285168 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" path="/var/lib/kubelet/pods/19a6950a-ef4b-4630-8fb9-700371df4f58/volumes" Feb 18 00:21:48 crc kubenswrapper[5121]: I0218 00:21:48.165842 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="72b07f22ea402db3bf59859afad0de3439909b8057b57cf2092c4b0d44df4a84" exitCode=0 Feb 18 00:21:48 crc kubenswrapper[5121]: I0218 00:21:48.165925 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"72b07f22ea402db3bf59859afad0de3439909b8057b57cf2092c4b0d44df4a84"} Feb 18 00:21:49 crc kubenswrapper[5121]: I0218 00:21:49.181848 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="6780c13276ae389f3e5964804021a90c86a92add5ae878279209ae2ac73bdb65" exitCode=0 Feb 18 00:21:49 crc kubenswrapper[5121]: I0218 00:21:49.182058 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"6780c13276ae389f3e5964804021a90c86a92add5ae878279209ae2ac73bdb65"} Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.552244 5121 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671152 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671239 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671395 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.672854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle" (OuterVolumeSpecName: "bundle") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.679357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc" (OuterVolumeSpecName: "kube-api-access-hfssc") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). 
InnerVolumeSpecName "kube-api-access-hfssc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.689166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util" (OuterVolumeSpecName: "util") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772724 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772780 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772798 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224316 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440"} Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224408 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440" Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224520 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.794581 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795779 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795794 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795809 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="util" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795816 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="util" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795825 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-utilities" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795833 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-utilities" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795846 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-content" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795853 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-content" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795863 5121 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795870 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795888 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="pull" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795895 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="pull" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.796007 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.796019 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953104 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953144 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953314 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.955681 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-966zk\"" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.958451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.958560 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017672 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: 
I0218 00:21:58.017758 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017785 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119120 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119377 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119470 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" 
Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119583 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119688 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119861 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119965 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.120059 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc 
kubenswrapper[5121]: I0218 00:21:58.137584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.137597 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.323193 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.330999 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.754809 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.816832 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.296795 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" event={"ID":"a9bb59e6-a92e-442e-87e6-b7331ba07de6","Type":"ContainerStarted","Data":"18627561ca4dc622fb61ff483608e16e766edcd5c29d43faddcc6c4366b100c8"} Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299590 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f" exitCode=0 Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299631 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f"} Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299690 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"cf03be90ca07127e64a0ee0a501bbfaad649d2c09d8407ea422896de154f3853"} Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.138075 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.144338 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.145910 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.183624 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.183846 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.184048 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.245420 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.312338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3"} Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.346571 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " 
pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.376514 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.537523 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.758017 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: W0218 00:22:00.760015 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode811a594_9ca7_4167_807e_e39bd75b7912.slice/crio-47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2 WatchSource:0}: Error finding container 47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2: Status 404 returned error can't find the container with id 47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2 Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 00:22:01.321686 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3" exitCode=0 Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 00:22:01.322140 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3"} Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 
00:22:01.325361 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerStarted","Data":"47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2"} Feb 18 00:22:02 crc kubenswrapper[5121]: I0218 00:22:02.334944 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6"} Feb 18 00:22:02 crc kubenswrapper[5121]: I0218 00:22:02.357761 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sq64t" podStartSLOduration=4.565105511 podStartE2EDuration="5.357743908s" podCreationTimestamp="2026-02-18 00:21:57 +0000 UTC" firstStartedPulling="2026-02-18 00:21:59.30039681 +0000 UTC m=+802.814854545" lastFinishedPulling="2026-02-18 00:22:00.093035207 +0000 UTC m=+803.607492942" observedRunningTime="2026-02-18 00:22:02.35303012 +0000 UTC m=+805.867487855" watchObservedRunningTime="2026-02-18 00:22:02.357743908 +0000 UTC m=+805.872201643" Feb 18 00:22:04 crc kubenswrapper[5121]: I0218 00:22:04.544942 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:22:04 crc kubenswrapper[5121]: I0218 00:22:04.545335 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:22:08 crc 
kubenswrapper[5121]: I0218 00:22:08.331791 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.332179 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.384575 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.445128 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:10 crc kubenswrapper[5121]: I0218 00:22:10.679858 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:10 crc kubenswrapper[5121]: I0218 00:22:10.680374 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sq64t" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" containerID="cri-o://ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" gracePeriod=2 Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.419000 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" exitCode=0 Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.419176 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6"} Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.845052 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.930987 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.931059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.931106 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.938892 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75" (OuterVolumeSpecName: "kube-api-access-rpx75") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "kube-api-access-rpx75". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.940062 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities" (OuterVolumeSpecName: "utilities") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.961919 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032549 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032584 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032625 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.428960 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.428969 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"cf03be90ca07127e64a0ee0a501bbfaad649d2c09d8407ea422896de154f3853"} Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.429080 5121 scope.go:117] "RemoveContainer" containerID="ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.480322 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.480574 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:13 crc kubenswrapper[5121]: I0218 00:22:13.282808 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" path="/var/lib/kubelet/pods/a0ab6087-f6f1-4788-bb13-52cf544d71ae/volumes" Feb 18 00:22:15 crc kubenswrapper[5121]: I0218 00:22:15.471433 5121 scope.go:117] "RemoveContainer" containerID="5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3" Feb 18 00:22:15 crc kubenswrapper[5121]: I0218 00:22:15.784761 5121 scope.go:117] "RemoveContainer" containerID="c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f" Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.459756 5121 generic.go:358] "Generic (PLEG): container finished" podID="e811a594-9ca7-4167-807e-e39bd75b7912" containerID="b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb" exitCode=0 Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.459883 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" 
event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerDied","Data":"b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb"} Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.461238 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" event={"ID":"a9bb59e6-a92e-442e-87e6-b7331ba07de6","Type":"ContainerStarted","Data":"b0138f1a09b39de8325c2baba5c44a3d5d29573228c24cabd31258a2b7309d15"} Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.483856 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" podStartSLOduration=2.179738211 podStartE2EDuration="19.483840867s" podCreationTimestamp="2026-02-18 00:21:57 +0000 UTC" firstStartedPulling="2026-02-18 00:21:58.763923014 +0000 UTC m=+802.278380749" lastFinishedPulling="2026-02-18 00:22:16.06802566 +0000 UTC m=+819.582483405" observedRunningTime="2026-02-18 00:22:16.483812006 +0000 UTC m=+819.998269751" watchObservedRunningTime="2026-02-18 00:22:16.483840867 +0000 UTC m=+819.998298602" Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.800002 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.913428 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"e811a594-9ca7-4167-807e-e39bd75b7912\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.921314 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h" (OuterVolumeSpecName: "kube-api-access-q4k7h") pod "e811a594-9ca7-4167-807e-e39bd75b7912" (UID: "e811a594-9ca7-4167-807e-e39bd75b7912"). InnerVolumeSpecName "kube-api-access-q4k7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.015012 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480247 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerDied","Data":"47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2"} Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480295 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480261 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:18 crc kubenswrapper[5121]: E0218 00:22:18.588808 5121 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode811a594_9ca7_4167_807e_e39bd75b7912.slice\": RecentStats: unable to find data in memory cache]" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.857215 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.867926 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:22:19 crc kubenswrapper[5121]: I0218 00:22:19.280859 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" path="/var/lib/kubelet/pods/17bd0236-52ea-4369-9891-8cf9e1dcff2b/volumes" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.544983 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.545473 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.545545 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:22:34 crc 
kubenswrapper[5121]: I0218 00:22:34.546555 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.546746 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2" gracePeriod=600 Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.635693 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2" exitCode=0 Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.635760 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.636734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"} Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.636798 5121 scope.go:117] "RemoveContainer" containerID="080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.064236 5121 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066152 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066203 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066286 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-utilities" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066305 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-utilities" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066348 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-content" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066365 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-content" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066388 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066403 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066688 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.067979 5121 
memory_manager.go:356] "RemoveStaleState removing state" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.100679 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.100911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.103252 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227339 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227417 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227646 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329805 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329979 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: 
\"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.331524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.335488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.370377 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.423113 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.660172 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.656977 5121 generic.go:358] "Generic (PLEG): container finished" podID="d9c883f8-94d3-4038-89dd-b6b0bf1e618a" containerID="2a43990773b68cdfac86367ff1fbc549dc09b16508ec0e914064e112cb7c1e87" exitCode=0 Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.657053 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerDied","Data":"2a43990773b68cdfac86367ff1fbc549dc09b16508ec0e914064e112cb7c1e87"} Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.657497 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerStarted","Data":"15866c64ac273331b31cb39b7ec7a65ea88c1f333734beac240806d0e1d8067b"} Feb 18 00:22:39 crc kubenswrapper[5121]: I0218 00:22:39.675470 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerStarted","Data":"3d8f91df2725f1a4a05396bd2101aed458f2a0ac51b032506c3182a1c1b9c823"} Feb 18 00:22:39 crc kubenswrapper[5121]: I0218 00:22:39.699066 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.942598706 podStartE2EDuration="3.699041842s" podCreationTimestamp="2026-02-18 00:22:36 +0000 UTC" 
firstStartedPulling="2026-02-18 00:22:37.658297138 +0000 UTC m=+841.172754903" lastFinishedPulling="2026-02-18 00:22:38.414740264 +0000 UTC m=+841.929198039" observedRunningTime="2026-02-18 00:22:39.697537021 +0000 UTC m=+843.211994826" watchObservedRunningTime="2026-02-18 00:22:39.699041842 +0000 UTC m=+843.213499617" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.296417 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"] Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.306300 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313544 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313745 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod 
\"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.322983 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"] Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.415911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.416093 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.416175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.417004 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.417388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.450259 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.627950 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.893447 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"] Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.899685 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.902606 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.907353 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"] Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925391 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925450 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925512 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc 
kubenswrapper[5121]: I0218 00:22:42.027128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.027203 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.027249 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.029113 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.029412 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.049615 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.108185 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"] Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.220697 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.472638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"] Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.490297 5121 scope.go:117] "RemoveContainer" containerID="07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7" Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.697580 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerStarted","Data":"5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283"} Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.697926 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerStarted","Data":"d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994"} Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701191 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"22de209a085b64f1dc864f6132975da72e6815afb420eb63a158d8e0a94a63be"} Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701080 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="22de209a085b64f1dc864f6132975da72e6815afb420eb63a158d8e0a94a63be" exitCode=0 Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701404 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerStarted","Data":"013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce"} Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.720793 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283" exitCode=0 Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.721340 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283"} Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.728338 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="4420ce8e6a7792586bacf59ad0d263c408b38b382c0cbdf405971da2f09df69c" exitCode=0 Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.728455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"4420ce8e6a7792586bacf59ad0d263c408b38b382c0cbdf405971da2f09df69c"} Feb 18 00:22:44 crc kubenswrapper[5121]: I0218 00:22:44.740075 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="2e10cb1dbf24d828471fa68ad727412918571d8fb49f9411586af58d4e259b57" exitCode=0 Feb 18 00:22:44 crc kubenswrapper[5121]: I0218 00:22:44.740120 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" 
event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"2e10cb1dbf24d828471fa68ad727412918571d8fb49f9411586af58d4e259b57"} Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.087787 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225176 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225240 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225936 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle" (OuterVolumeSpecName: "bundle") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.238668 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util" (OuterVolumeSpecName: "util") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.239372 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx" (OuterVolumeSpecName: "kube-api-access-tmvwx") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "kube-api-access-tmvwx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326384 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326827 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326840 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.758640 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="cb5dc786e0f0b26b514b716554567d503b5bed0c885811d53daec6278b28f7b6" exitCode=0 Feb 18 00:22:46 crc 
kubenswrapper[5121]: I0218 00:22:46.758876 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"cb5dc786e0f0b26b514b716554567d503b5bed0c885811d53daec6278b28f7b6"} Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768351 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce"} Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768400 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce" Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768432 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" Feb 18 00:22:47 crc kubenswrapper[5121]: I0218 00:22:47.783053 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="56e17c7afb3cf25c3d808d2266252b6800c995d5ec17b98403cebc71b4c5f642" exitCode=0 Feb 18 00:22:47 crc kubenswrapper[5121]: I0218 00:22:47.783202 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"56e17c7afb3cf25c3d808d2266252b6800c995d5ec17b98403cebc71b4c5f642"} Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.147079 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.286073 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.286859 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.287080 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.288688 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle" (OuterVolumeSpecName: "bundle") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.295477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h" (OuterVolumeSpecName: "kube-api-access-m6l8h") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "kube-api-access-m6l8h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.305963 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util" (OuterVolumeSpecName: "util") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389527 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389558 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389567 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804095 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994"} Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804154 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994" Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804339 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.988555 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"] Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989610 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989628 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989674 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="util" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989684 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="util" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989700 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="util" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989707 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="util" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989732 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="pull" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989739 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="pull" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989764 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989772 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989785 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="pull" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989792 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="pull" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989904 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989921 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.995320 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.999195 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-2wqmc\"" Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.009111 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"] Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.103970 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.205035 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.230457 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.313430 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.554532 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"] Feb 18 00:22:57 crc kubenswrapper[5121]: W0218 00:22:57.556391 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fe2ffb0_1690_49b9_a86e_88e147ec4ca6.slice/crio-d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce WatchSource:0}: Error finding container d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce: Status 404 returned error can't find the container with id d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.881846 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" event={"ID":"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6","Type":"ContainerStarted","Data":"d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce"} Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.228059 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"] Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.238833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"] Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.238954 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.241966 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-fsn2j\"" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.336045 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.336801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.437800 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.437892 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " 
pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.438614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.471752 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.558312 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.852296 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"] Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.898101 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" event={"ID":"24352f2e-20c2-4d2e-bd18-8fb703441b7b","Type":"ContainerStarted","Data":"4fb4b78617caec80359836c04c6a8b217475eb66c126b9ce646a33a5fb209f9a"} Feb 18 00:23:10 crc kubenswrapper[5121]: I0218 00:23:10.987771 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" event={"ID":"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6","Type":"ContainerStarted","Data":"21e004903732d0a96b12b11f2fa9555552057c187a0fd687b9659f17ced53748"} Feb 18 00:23:10 crc kubenswrapper[5121]: I0218 00:23:10.989450 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" event={"ID":"24352f2e-20c2-4d2e-bd18-8fb703441b7b","Type":"ContainerStarted","Data":"1caf7773b361d7cc0f3bd51335f7e76489fc1bd67336b9116be0bca433cee03f"} Feb 18 00:23:11 crc kubenswrapper[5121]: I0218 00:23:11.010732 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" podStartSLOduration=1.8978910629999999 podStartE2EDuration="15.010710797s" podCreationTimestamp="2026-02-18 00:22:56 +0000 UTC" firstStartedPulling="2026-02-18 00:22:57.557793568 +0000 UTC m=+861.072251323" lastFinishedPulling="2026-02-18 00:23:10.670613322 +0000 UTC m=+874.185071057" observedRunningTime="2026-02-18 00:23:11.008144848 +0000 UTC m=+874.522602603" watchObservedRunningTime="2026-02-18 00:23:11.010710797 +0000 UTC m=+874.525168532" Feb 18 00:23:11 crc kubenswrapper[5121]: I0218 00:23:11.027981 
5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" podStartSLOduration=1.143850666 podStartE2EDuration="12.027959474s" podCreationTimestamp="2026-02-18 00:22:59 +0000 UTC" firstStartedPulling="2026-02-18 00:22:59.875273529 +0000 UTC m=+863.389731264" lastFinishedPulling="2026-02-18 00:23:10.759382337 +0000 UTC m=+874.273840072" observedRunningTime="2026-02-18 00:23:11.026138895 +0000 UTC m=+874.540596630" watchObservedRunningTime="2026-02-18 00:23:11.027959474 +0000 UTC m=+874.542417209" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.707513 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.718481 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.722286 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.722914 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.723028 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.723093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.724005 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-mdl7b\"" Feb 18 00:23:31 
crc kubenswrapper[5121]: I0218 00:23:31.724281 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.729342 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.737457 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823207 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823626 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823782 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823901 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.824032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.824247 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925490 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925572 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925980 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.926175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.932957 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.933133 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.950305 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.950376 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.951227 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.955560 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:23:32 crc kubenswrapper[5121]: I0218 00:23:32.049549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:23:32 crc kubenswrapper[5121]: I0218 00:23:32.509017 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"]
Feb 18 00:23:32 crc kubenswrapper[5121]: W0218 00:23:32.517161 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a43f5a_93d6_4bf5_9595_4b068338fb4b.slice/crio-9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb WatchSource:0}: Error finding container 9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb: Status 404 returned error can't find the container with id 9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb
Feb 18 00:23:33 crc kubenswrapper[5121]: I0218 00:23:33.172214 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerStarted","Data":"9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb"}
Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.656395 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log"
Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.664599 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log"
Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.675472 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.681558 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 18 00:23:38 crc kubenswrapper[5121]: I0218 00:23:38.229397 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerStarted","Data":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"}
Feb 18 00:23:38 crc kubenswrapper[5121]: I0218 00:23:38.267369 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" podStartSLOduration=2.270373654 podStartE2EDuration="7.267340145s" podCreationTimestamp="2026-02-18 00:23:31 +0000 UTC" firstStartedPulling="2026-02-18 00:23:32.519173661 +0000 UTC m=+896.033631406" lastFinishedPulling="2026-02-18 00:23:37.516140122 +0000 UTC m=+901.030597897" observedRunningTime="2026-02-18 00:23:38.255024612 +0000 UTC m=+901.769482347" watchObservedRunningTime="2026-02-18 00:23:38.267340145 +0000 UTC m=+901.781797920"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.065535 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.090008 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.090225 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096227 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096350 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-kmb9h\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096459 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096522 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096235 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096474 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096976 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.097284 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.097791 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.098137 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\""
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.202948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203005 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203064 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203227 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203341 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203382 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203488 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203549 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203597 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203625 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203692 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203854 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305030 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305104 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305142 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305174 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305327 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.305366 5121 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.305531 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls podName:7acc81c6-6ef1-4c1d-ac51-c020076734e6 nodeName:}" failed. No retries permitted until 2026-02-18 00:23:42.805488587 +0000 UTC m=+906.319946352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7acc81c6-6ef1-4c1d-ac51-c020076734e6") : secret "default-prometheus-proxy-tls" not found
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305998 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306067 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306100 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306825 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.307618 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.307715 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.313827 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.314688 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.315049 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.315107 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5aa9db2d83b79f6d98384877ca4fd57474ec19f48c0eca6a401cef73b2c9bec/globalmount\"" pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.319251 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.328323 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.329394 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.352850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.355005 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.815213 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.815371 5121 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.815450 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls podName:7acc81c6-6ef1-4c1d-ac51-c020076734e6 nodeName:}" failed. No retries permitted until 2026-02-18 00:23:43.815430073 +0000 UTC m=+907.329887808 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7acc81c6-6ef1-4c1d-ac51-c020076734e6") : secret "default-prometheus-proxy-tls" not found
Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.830365 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.835285 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.913881 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Feb 18 00:23:44 crc kubenswrapper[5121]: I0218 00:23:44.155439 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 18 00:23:44 crc kubenswrapper[5121]: W0218 00:23:44.160773 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7acc81c6_6ef1_4c1d_ac51_c020076734e6.slice/crio-927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee WatchSource:0}: Error finding container 927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee: Status 404 returned error can't find the container with id 927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee
Feb 18 00:23:44 crc kubenswrapper[5121]: I0218 00:23:44.271562 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee"}
Feb 18 00:23:50 crc kubenswrapper[5121]: I0218 00:23:50.323617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56"}
Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.924491 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"]
Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.949272 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"]
Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.949394 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"
Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.067887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"
Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.170122 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"
Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.189303 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"
Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.271214 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"
Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.776258 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"]
Feb 18 00:23:53 crc kubenswrapper[5121]: I0218 00:23:53.346470 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" event={"ID":"37bc1d59-8b60-48c3-aabd-f9337333ef2b","Type":"ContainerStarted","Data":"8619e4999ff9f2ab0ee03dc36aeb42442c2880e72a87b2b7f106b2391f128baa"}
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.787197 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.905175 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.905333 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.908833 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.909599 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.910311 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.910622 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.911133 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-6vp2p\""
Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.912679 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033099 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033228 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033273 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033438 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033479 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hpz\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033570 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033694 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134967 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hpz\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135008 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135132 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135184 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.135406 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.135491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. No retries permitted until 2026-02-18 00:23:56.635463685 +0000 UTC m=+920.149921430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136103 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136145 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136164 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0"
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.139849 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.140254 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66ff35c8059af9e4c4e52464365163bf5bb0a4b624024d7bea302fe3d0e72496/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.141140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.141200 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.145012 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.148276 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod 
\"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.148673 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.155751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hpz\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.155853 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.178451 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.642079 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.642336 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.642456 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. No retries permitted until 2026-02-18 00:23:57.642431169 +0000 UTC m=+921.156888904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.378390 5121 generic.go:358] "Generic (PLEG): container finished" podID="7acc81c6-6ef1-4c1d-ac51-c020076734e6" containerID="6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56" exitCode=0 Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.378480 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerDied","Data":"6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56"} Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.655234 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " 
pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:57 crc kubenswrapper[5121]: E0218 00:23:57.655493 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:57 crc kubenswrapper[5121]: E0218 00:23:57.655571 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. No retries permitted until 2026-02-18 00:23:59.655548602 +0000 UTC m=+923.170006367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.687216 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.695216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.828042 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.136610 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.317223 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.317331 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320402 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320806 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320851 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.395628 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.497086 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " 
pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.523964 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.634377 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:01 crc kubenswrapper[5121]: I0218 00:24:01.289233 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:01 crc kubenswrapper[5121]: W0218 00:24:01.431306 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb912abb_9dfb_4035_9eea_266ad0057af0.slice/crio-80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73 WatchSource:0}: Error finding container 80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73: Status 404 returned error can't find the container with id 80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73 Feb 18 00:24:01 crc kubenswrapper[5121]: W0218 00:24:01.473572 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36845eb3_f7ec_4a0f_81ca_6650cc34a86d.slice/crio-2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee WatchSource:0}: Error finding container 2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee: Status 404 returned error can't find the container with id 2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee Feb 18 00:24:01 crc kubenswrapper[5121]: I0218 00:24:01.475331 5121 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.411427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerStarted","Data":"80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.413600 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" event={"ID":"37bc1d59-8b60-48c3-aabd-f9337333ef2b","Type":"ContainerStarted","Data":"019479f7141b25a04ee0c3699d1e1a9765a5d52e9739da147c7c5f952a54a895"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.417633 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.426436 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" podStartSLOduration=2.67786527 podStartE2EDuration="11.426414803s" podCreationTimestamp="2026-02-18 00:23:51 +0000 UTC" firstStartedPulling="2026-02-18 00:23:52.781344133 +0000 UTC m=+916.295801878" lastFinishedPulling="2026-02-18 00:24:01.529893676 +0000 UTC m=+925.044351411" observedRunningTime="2026-02-18 00:24:02.424856451 +0000 UTC m=+925.939314196" watchObservedRunningTime="2026-02-18 00:24:02.426414803 +0000 UTC m=+925.940872548" Feb 18 00:24:03 crc kubenswrapper[5121]: I0218 00:24:03.427756 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.447037 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerStarted","Data":"6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.449333 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"bd46fd6abe11b67ea91c607f7d6a4a27bbf2ef814f2c34b3732767d685580aae"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.459581 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29522904-frzvq" podStartSLOduration=2.540293206 podStartE2EDuration="6.459562396s" podCreationTimestamp="2026-02-18 00:24:00 +0000 UTC" firstStartedPulling="2026-02-18 00:24:01.432705865 +0000 UTC m=+924.947163600" lastFinishedPulling="2026-02-18 00:24:05.351975055 +0000 UTC m=+928.866432790" observedRunningTime="2026-02-18 00:24:06.457960763 +0000 UTC m=+929.972418498" watchObservedRunningTime="2026-02-18 00:24:06.459562396 +0000 UTC m=+929.974020151" Feb 18 00:24:07 crc kubenswrapper[5121]: I0218 00:24:07.457504 5121 generic.go:358] "Generic (PLEG): container finished" podID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerID="6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff" exitCode=0 Feb 18 00:24:07 crc kubenswrapper[5121]: I0218 00:24:07.457943 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" 
event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerDied","Data":"6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"} Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.462985 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509449 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"a2317347a2371f7f72d7583626cc53fa549b6cc8af923a9a4fbb4f967bfe4e74"} Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509778 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509564 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512299 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512678 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512814 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-jq5n2\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.513875 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643316 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643434 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643470 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643564 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643666 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745230 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745300 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745328 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745431 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: E0218 00:24:08.745481 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:08 crc kubenswrapper[5121]: E0218 00:24:08.745558 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls podName:3a752ce6-d6e6-4222-9c73-8f79a4272c55 nodeName:}" failed. No retries permitted until 2026-02-18 00:24:09.245538355 +0000 UTC m=+932.759996090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" (UID: "3a752ce6-d6e6-4222-9c73-8f79a4272c55") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.746035 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.746604 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.752495 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.770488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.806297 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.946912 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"fb912abb-9dfb-4035-9eea-266ad0057af0\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.955202 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28" (OuterVolumeSpecName: "kube-api-access-xgj28") pod "fb912abb-9dfb-4035-9eea-266ad0057af0" (UID: "fb912abb-9dfb-4035-9eea-266ad0057af0"). InnerVolumeSpecName "kube-api-access-xgj28". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.048102 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.250623 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:09 crc kubenswrapper[5121]: E0218 00:24:09.250827 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:09 crc kubenswrapper[5121]: E0218 00:24:09.250944 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls podName:3a752ce6-d6e6-4222-9c73-8f79a4272c55 nodeName:}" failed. No retries permitted until 2026-02-18 00:24:10.250922315 +0000 UTC m=+933.765380050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" (UID: "3a752ce6-d6e6-4222-9c73-8f79a4272c55") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475507 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475565 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerDied","Data":"80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73"} Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475700 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.529111 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.534806 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.266224 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.278335 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 
00:24:10.324808 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: E0218 00:24:10.390172 5121 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36845eb3_f7ec_4a0f_81ca_6650cc34a86d.slice/crio-conmon-81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.501435 5121 generic.go:358] "Generic (PLEG): container finished" podID="36845eb3-f7ec-4a0f-81ca-6650cc34a86d" containerID="81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed" exitCode=0 Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.501486 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerDied","Data":"81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed"} Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.756066 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:10 crc kubenswrapper[5121]: W0218 00:24:10.761193 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a752ce6_d6e6_4222_9c73_8f79a4272c55.slice/crio-d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c WatchSource:0}: Error finding container d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c: Status 404 returned error can't find the container with id d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.031111 5121 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035304 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035346 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035507 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.060164 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.060318 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.063154 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.063895 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182203 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc 
kubenswrapper[5121]: I0218 00:24:11.182254 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khnnw\" (UniqueName: \"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182316 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182410 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182435 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288367 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288546 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288597 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288697 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khnnw\" (UniqueName: 
\"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.289135 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.289253 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls podName:91bcc3e0-8b13-4cb5-a115-01265bb95b3a nodeName:}" failed. No retries permitted until 2026-02-18 00:24:11.789227109 +0000 UTC m=+935.303684844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" (UID: "91bcc3e0-8b13-4cb5-a115-01265bb95b3a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.290386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.290716 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" 
(UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.299849 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.305785 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" path="/var/lib/kubelet/pods/0752b905-c20c-4af0-a716-b5297e9ed6fc/volumes" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.347434 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khnnw\" (UniqueName: \"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.510166 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c"} Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.638303 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.655385 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.685318 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.801122 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.801260 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls podName:91bcc3e0-8b13-4cb5-a115-01265bb95b3a nodeName:}" failed. No retries permitted until 2026-02-18 00:24:12.801237708 +0000 UTC m=+936.315695443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" (UID: "91bcc3e0-8b13-4cb5-a115-01265bb95b3a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.800955 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.801879 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod 
\"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.801963 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.802061 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903368 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903503 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903538 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: 
\"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.904696 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.904738 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.927829 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.009733 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.535301 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:12 crc kubenswrapper[5121]: W0218 00:24:12.556131 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a327dc_9b2e_492e_b906_456dbc2fc6a8.slice/crio-2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d WatchSource:0}: Error finding container 2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d: Status 404 returned error can't find the container with id 2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.836259 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.845332 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.881353 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529583 5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b" exitCode=0 Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529642 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b"} Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529729 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerStarted","Data":"2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d"} Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.342106 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"] Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.565594 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"] Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.565741 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.567949 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.567988 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.675948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjp4\" (UniqueName: \"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.675995 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.676026 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc 
kubenswrapper[5121]: I0218 00:24:15.676052 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.676074 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777783 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777841 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777933 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjp4\" (UniqueName: 
\"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777960 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777978 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: E0218 00:24:15.778135 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:15 crc kubenswrapper[5121]: E0218 00:24:15.778198 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls podName:0f0eb637-4674-4fad-bb8e-e0b7d5ac913b nodeName:}" failed. No retries permitted until 2026-02-18 00:24:16.278181869 +0000 UTC m=+939.792639604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" (UID: "0f0eb637-4674-4fad-bb8e-e0b7d5ac913b") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.778559 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.779000 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.784193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.794903 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjp4\" (UniqueName: \"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:16 crc kubenswrapper[5121]: I0218 00:24:16.286196 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:16 crc kubenswrapper[5121]: E0218 00:24:16.286384 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:16 crc kubenswrapper[5121]: E0218 00:24:16.286754 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls podName:0f0eb637-4674-4fad-bb8e-e0b7d5ac913b nodeName:}" failed. No retries permitted until 2026-02-18 00:24:17.286733646 +0000 UTC m=+940.801191371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" (UID: "0f0eb637-4674-4fad-bb8e-e0b7d5ac913b") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.306679    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"
Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.316427    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"
Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.384104    5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"
Feb 18 00:24:18 crc kubenswrapper[5121]: I0218 00:24:18.244743    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"]
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.469520    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"]
Feb 18 00:24:19 crc kubenswrapper[5121]: W0218 00:24:19.470455    5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f0eb637_4674_4fad_bb8e_e0b7d5ac913b.slice/crio-7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9 WatchSource:0}: Error finding container 7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9: Status 404 returned error can't find the container with id 7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.573831    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.575980    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"071b5e39dd4ac4914a52e120b41b6f781f26dfc0c5e96684364c62394b496601"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.577823    5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4" exitCode=0
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.577856    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.580681    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"8ed2cd045ceb026d04ae32346ac2f322abf898e75bbf09092d3aa9b392e18a1b"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.586376    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"fbe403427ad4c94ed9ebfde76600b3526a2e3fd32848bd54582fde0e4ca7bfac"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.600836    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"50d14054aa7d1fa3bac45fca8f3330519f05dc6f5e47e7292cfa22441815e0e2"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.600875    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"d569ccb340feefe343714d179e6002dcef1c06be840690b75462843418dfb554"}
Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.629834    5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.732579142 podStartE2EDuration="38.629817118s" podCreationTimestamp="2026-02-18 00:23:41 +0000 UTC" firstStartedPulling="2026-02-18 00:23:44.164326481 +0000 UTC m=+907.678784226" lastFinishedPulling="2026-02-18 00:24:19.061564467 +0000 UTC m=+942.576022202" observedRunningTime="2026-02-18 00:24:19.619829468 +0000 UTC m=+943.134287213" watchObservedRunningTime="2026-02-18 00:24:19.629817118 +0000 UTC m=+943.144274853"
Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.636016    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"fe76f828030cc82e9fe77ba56db235ef3083eb14713524748290777b0e579992"}
Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.643712    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerStarted","Data":"7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880"}
Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.675458    5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7j85x" podStartSLOduration=4.140915838 podStartE2EDuration="9.675445332s" podCreationTimestamp="2026-02-18 00:24:11 +0000 UTC" firstStartedPulling="2026-02-18 00:24:13.530502157 +0000 UTC m=+937.044959902" lastFinishedPulling="2026-02-18 00:24:19.065031671 +0000 UTC m=+942.579489396" observedRunningTime="2026-02-18 00:24:20.673762437 +0000 UTC m=+944.188220172" watchObservedRunningTime="2026-02-18 00:24:20.675445332 +0000 UTC m=+944.189903067"
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.011543    5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.011585    5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.066376    5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.191270    5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"]
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.968240    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"]
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.968861    5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.973076    5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.973843    5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002466    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002606    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002764    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.003088    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104203    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104315    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104466    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104528    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.105587    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.105664    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.112713    5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"]
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.119290    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.131318    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.210327    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"]
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.210571    5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.214906    5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.289378    5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306543    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306603    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306768    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306791    5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408814    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408880    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408940    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.409003    5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.409740    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.410074    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.413896    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.444282    5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.533019    5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.575105    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"]
Feb 18 00:24:23 crc kubenswrapper[5121]: W0218 00:24:23.585795    5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde3c7540_7b8d_4e77_968d_68b42aecf4df.slice/crio-11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d WatchSource:0}: Error finding container 11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d: Status 404 returned error can't find the container with id 11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.675529    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d"}
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.914489    5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.958936    5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"]
Feb 18 00:24:23 crc kubenswrapper[5121]: W0218 00:24:23.966638    5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906f1c26_b94f_41a4_98f4_524412eb9029.slice/crio-37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10 WatchSource:0}: Error finding container 37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10: Status 404 returned error can't find the container with id 37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10
Feb 18 00:24:24 crc kubenswrapper[5121]: I0218 00:24:24.698518    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"0f6862c1c325e0eebb91cb6d731dcea060c881df94bf5f53b5a256341e498345"}
Feb 18 00:24:24 crc kubenswrapper[5121]: I0218 00:24:24.700686    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10"}
Feb 18 00:24:28 crc kubenswrapper[5121]: I0218 00:24:28.914163    5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:28 crc kubenswrapper[5121]: I0218 00:24:28.973426    5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.755512    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"a42d833080ebec1d54c021dfee797b5eabef0de6aea3b852f2b616a7a362a42c"}
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.815594    5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.415989117 podStartE2EDuration="35.815572765s" podCreationTimestamp="2026-02-18 00:23:54 +0000 UTC" firstStartedPulling="2026-02-18 00:24:10.502549725 +0000 UTC m=+934.017007460" lastFinishedPulling="2026-02-18 00:24:25.902133343 +0000 UTC m=+949.416591108" observedRunningTime="2026-02-18 00:24:29.785726277 +0000 UTC m=+953.300184022" watchObservedRunningTime="2026-02-18 00:24:29.815572765 +0000 UTC m=+953.330030510"
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.820252    5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.767611    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.776680    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.780588    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.788202    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.793500    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70"}
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.718720    5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.760837    5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j85x"]
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.816314    5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7j85x" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server" containerID="cri-o://7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" gracePeriod=2
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.545071    5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.545171    5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.841859    5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" exitCode=0
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.841937    5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880"}
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.913244    5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.968535    5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"]
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.968758    5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect" containerID="cri-o://999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" gracePeriod=30
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031566    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031601    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031670    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.032626    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities" (OuterVolumeSpecName: "utilities") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.037503    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk" (OuterVolumeSpecName: "kube-api-access-6w7lk") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "kube-api-access-6w7lk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.092745    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132718    5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132759    5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132770    5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.303579    5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340208    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340522    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340851    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341044    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341204    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341438    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341614    5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341977    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.342741    5121 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347349    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg" (OuterVolumeSpecName: "kube-api-access-2fxvg") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "kube-api-access-2fxvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347575    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347759    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.352592    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.352888    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.353025    5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "sasl-users".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.360717 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361392 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-utilities" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361408 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-utilities" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361425 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361430 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361452 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361458 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361474 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-content" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361479 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-content" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361586 5121 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361607 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.370358 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.370488 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444237 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444528 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444709 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" 
(UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444810 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444920 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445006 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445107 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445230 5121 reconciler_common.go:299] "Volume 
detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445297 5121 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445358 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445410 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445469 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445526 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547271 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547374 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547414 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547446 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547479 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc 
kubenswrapper[5121]: I0218 00:24:35.547526 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547592 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.550627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.551595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552040 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552462 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.554789 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.566106 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 
00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.688809 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.854840 5121 generic.go:358] "Generic (PLEG): container finished" podID="906f1c26-b94f-41a4-98f4-524412eb9029" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.854965 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerDied","Data":"cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.855032 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"efda4950061f540a7ffd72f347e5c4a18557ebe47ccf07ffb1b1a9fab3211bae"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.856996 5121 scope.go:117] "RemoveContainer" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857436 5121 generic.go:358] "Generic (PLEG): container finished" podID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857516 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857571 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerDied","Data":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857642 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerDied","Data":"9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857714 5121 scope.go:117] "RemoveContainer" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.858222 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.863888 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.863953 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873363 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873435 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerDied","Data":"1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873510 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"6963696d1f9a84a281118d3169e896ad36a6021a3604a72ec4a3182e1c91767c"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.875085 5121 scope.go:117] "RemoveContainer" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.878206 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"cefa498b634ef72c709f357711a1f7d1acabd66398276efd8fbb5fcfe560ed88"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890560 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3c7540-7b8d-4e77-968d-68b42aecf4df" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890737 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerDied","Data":"061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890774 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"d3bd4c142e299f09b3938e52b63ceb1cf56ec80e91344bf1cd5dc1a6097057bd"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.891745 5121 scope.go:117] "RemoveContainer" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.900238 5121 scope.go:117] "RemoveContainer" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" Feb 18 00:24:35 crc kubenswrapper[5121]: E0218 00:24:35.902241 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": container with ID starting with 999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044 not found: ID does not exist" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.902283 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"} err="failed to get container status \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": rpc error: code = NotFound desc = could not find container \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": container with ID starting with 999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044 not found: ID does not exist" Feb 
18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.902311 5121 scope.go:117] "RemoveContainer" containerID="7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909036 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909121 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerDied","Data":"43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909187 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"57290db7656ed8c68e6185d925943f220702a9c2cba0ca8af863658c2a3f1ef0"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.910862 5121 scope.go:117] "RemoveContainer" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.943306 5121 scope.go:117] "RemoveContainer" containerID="41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.952841 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.960543 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.970954 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" podStartSLOduration=9.147757179 podStartE2EDuration="24.970935783s" podCreationTimestamp="2026-02-18 00:24:11 +0000 UTC" firstStartedPulling="2026-02-18 00:24:18.85407422 +0000 UTC m=+942.368531975" lastFinishedPulling="2026-02-18 00:24:34.677252844 +0000 UTC m=+958.191710579" observedRunningTime="2026-02-18 00:24:35.964226022 +0000 UTC m=+959.478683757" watchObservedRunningTime="2026-02-18 00:24:35.970935783 +0000 UTC m=+959.485393518" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.980177 5121 scope.go:117] "RemoveContainer" containerID="98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.013608 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.018871 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.229173 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"] Feb 18 00:24:36 crc kubenswrapper[5121]: W0218 00:24:36.229736 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58efa647_6d57_485a_89c5_66d831cf05c5.slice/crio-386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3 WatchSource:0}: Error finding container 386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3: Status 404 returned error can't find the container with id 386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3 Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.918282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" 
event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921179 5121 generic.go:358] "Generic (PLEG): container finished" podID="91bcc3e0-8b13-4cb5-a115-01265bb95b3a" containerID="ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247" exitCode=0 Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921302 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerDied","Data":"ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921966 5121 scope.go:117] "RemoveContainer" containerID="ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.923792 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.932159 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.934853 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a"} Feb 18 00:24:36 
crc kubenswrapper[5121]: I0218 00:24:36.940341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" event={"ID":"58efa647-6d57-485a-89c5-66d831cf05c5","Type":"ContainerStarted","Data":"9f776baaa4499d30b59a74c49747fa59bd41448eb4eae9a5065731be6acbdf23"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.940405 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" event={"ID":"58efa647-6d57-485a-89c5-66d831cf05c5","Type":"ContainerStarted","Data":"386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.950734 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podStartSLOduration=3.3765267310000002 podStartE2EDuration="28.950716744s" podCreationTimestamp="2026-02-18 00:24:08 +0000 UTC" firstStartedPulling="2026-02-18 00:24:10.762806009 +0000 UTC m=+934.277263744" lastFinishedPulling="2026-02-18 00:24:36.336996002 +0000 UTC m=+959.851453757" observedRunningTime="2026-02-18 00:24:36.94463172 +0000 UTC m=+960.459089465" watchObservedRunningTime="2026-02-18 00:24:36.950716744 +0000 UTC m=+960.465174489" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.985281 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podStartSLOduration=1.675015187 podStartE2EDuration="13.98526786s" podCreationTimestamp="2026-02-18 00:24:23 +0000 UTC" firstStartedPulling="2026-02-18 00:24:23.968777279 +0000 UTC m=+947.483235014" lastFinishedPulling="2026-02-18 00:24:36.279029952 +0000 UTC m=+959.793487687" observedRunningTime="2026-02-18 00:24:36.981819946 +0000 UTC m=+960.496277691" watchObservedRunningTime="2026-02-18 00:24:36.98526786 +0000 UTC m=+960.499725595" Feb 18 00:24:37 crc 
kubenswrapper[5121]: I0218 00:24:37.017862 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podStartSLOduration=4.991627983 podStartE2EDuration="22.017841111s" podCreationTimestamp="2026-02-18 00:24:15 +0000 UTC" firstStartedPulling="2026-02-18 00:24:19.475016029 +0000 UTC m=+942.989473764" lastFinishedPulling="2026-02-18 00:24:36.501229147 +0000 UTC m=+960.015686892" observedRunningTime="2026-02-18 00:24:37.009815824 +0000 UTC m=+960.524273569" watchObservedRunningTime="2026-02-18 00:24:37.017841111 +0000 UTC m=+960.532298866" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.056347 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podStartSLOduration=2.326189145 podStartE2EDuration="15.056323724s" podCreationTimestamp="2026-02-18 00:24:22 +0000 UTC" firstStartedPulling="2026-02-18 00:24:23.587324524 +0000 UTC m=+947.101782259" lastFinishedPulling="2026-02-18 00:24:36.317459083 +0000 UTC m=+959.831916838" observedRunningTime="2026-02-18 00:24:37.05030199 +0000 UTC m=+960.564759735" watchObservedRunningTime="2026-02-18 00:24:37.056323724 +0000 UTC m=+960.570781469" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.077426 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" podStartSLOduration=3.077396254 podStartE2EDuration="3.077396254s" podCreationTimestamp="2026-02-18 00:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:24:37.068954985 +0000 UTC m=+960.583412740" watchObservedRunningTime="2026-02-18 00:24:37.077396254 +0000 UTC m=+960.591854039" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.279803 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" path="/var/lib/kubelet/pods/28a327dc-9b2e-492e-b906-456dbc2fc6a8/volumes" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.280858 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" path="/var/lib/kubelet/pods/e1a43f5a-93d6-4bf5-9595-4b068338fb4b/volumes" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948189 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948262 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerDied","Data":"035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948554 5121 scope.go:117] "RemoveContainer" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948837 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.949209 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_service-telemetry(3a752ce6-d6e6-4222-9c73-8f79a4272c55)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podUID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.957339 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"d6f65c4bca08644816b9ab16ef0e781c2411f29df16531d608f70f576a379950"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964062 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3c7540-7b8d-4e77-968d-68b42aecf4df" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964157 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerDied","Data":"78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964793 5121 scope.go:117] "RemoveContainer" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.965102 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_service-telemetry(de3c7540-7b8d-4e77-968d-68b42aecf4df)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podUID="de3c7540-7b8d-4e77-968d-68b42aecf4df" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.972880 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.973144 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" 
event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerDied","Data":"101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.973784 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.974131 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_service-telemetry(0f0eb637-4674-4fad-bb8e-e0b7d5ac913b)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podUID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.976910 5121 generic.go:358] "Generic (PLEG): container finished" podID="906f1c26-b94f-41a4-98f4-524412eb9029" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.977885 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerDied","Data":"bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.978032 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.978214 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_service-telemetry(906f1c26-b94f-41a4-98f4-524412eb9029)\"" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podUID="906f1c26-b94f-41a4-98f4-524412eb9029" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.009377 5121 scope.go:117] "RemoveContainer" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.072813 5121 scope.go:117] "RemoveContainer" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.108396 5121 scope.go:117] "RemoveContainer" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.987083 5121 scope.go:117] "RemoveContainer" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.987561 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_service-telemetry(de3c7540-7b8d-4e77-968d-68b42aecf4df)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podUID="de3c7540-7b8d-4e77-968d-68b42aecf4df" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.989126 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.989541 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_service-telemetry(0f0eb637-4674-4fad-bb8e-e0b7d5ac913b)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podUID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" Feb 18 
00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.991022 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.991218 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_service-telemetry(906f1c26-b94f-41a4-98f4-524412eb9029)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podUID="906f1c26-b94f-41a4-98f4-524412eb9029" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.995438 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.995745 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_service-telemetry(3a752ce6-d6e6-4222-9c73-8f79a4272c55)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podUID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" Feb 18 00:24:42 crc kubenswrapper[5121]: I0218 00:24:42.661716 5121 scope.go:117] "RemoveContainer" containerID="6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea" Feb 18 00:24:50 crc kubenswrapper[5121]: I0218 00:24:50.271077 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:50 crc kubenswrapper[5121]: I0218 00:24:50.271700 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:51 crc kubenswrapper[5121]: I0218 00:24:51.274626 5121 scope.go:117] "RemoveContainer" 
containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.091078 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"e50909e265bf8cc9b537e1d96694f2bc08543f924a3141fc31bd355be65ae0a4"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.093822 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"937a181aeb5d024a886e741706ae7b0ea76c42904b602cdffcbbc6ca17c4fcdb"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.096080 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"3c1d34b2fbe278dcf6aa69e7f6f959730166231d83dee8f99d1c10dbae02f0d4"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.270859 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:53 crc kubenswrapper[5121]: I0218 00:24:53.108814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"067271c43756789b2f5a8776ee364b26c15cb8eac1f9152fa0904c40c95d5639"} Feb 18 00:25:04 crc kubenswrapper[5121]: I0218 00:25:04.548867 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 00:25:04 crc kubenswrapper[5121]: I0218 00:25:04.550149 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:25:05 crc kubenswrapper[5121]: I0218 00:25:05.326834 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.361848 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.362024 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.365145 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.366030 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.459544 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.459593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.460097 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561188 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561257 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.562584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 
00:25:06.573532 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.589261 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.696798 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 18 00:25:07 crc kubenswrapper[5121]: I0218 00:25:07.213813 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:08 crc kubenswrapper[5121]: I0218 00:25:08.222781 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"660e52e5-64b3-47d1-b593-e7a50159a146","Type":"ContainerStarted","Data":"f322c4567ab42e23d34367a80478f86ad3587ed43060904bf7d2bf2008455d92"} Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.293951 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"660e52e5-64b3-47d1-b593-e7a50159a146","Type":"ContainerStarted","Data":"17f89fea5d066b2617d6a297a852feecfa82932e483fbb6ba32f1827aed606cb"} Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.314235 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.618736681 podStartE2EDuration="10.314209138s" podCreationTimestamp="2026-02-18 00:25:05 +0000 UTC" firstStartedPulling="2026-02-18 00:25:07.226570486 +0000 UTC 
m=+990.741028221" lastFinishedPulling="2026-02-18 00:25:14.922042943 +0000 UTC m=+998.436500678" observedRunningTime="2026-02-18 00:25:15.30804177 +0000 UTC m=+998.822499595" watchObservedRunningTime="2026-02-18 00:25:15.314209138 +0000 UTC m=+998.828666903" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.630287 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"] Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.653369 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659419 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659630 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659423 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.660151 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.660591 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.661875 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.669768 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"] Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690039 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690124 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690146 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690202 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690309 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690463 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791626 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791738 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791771 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791802 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791909 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791950 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793068 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793443 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793712 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793904 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793960 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.821232 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.985696 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.072407 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.081913 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl"
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.091186 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.199488 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl"
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.236369 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"]
Feb 18 00:25:16 crc kubenswrapper[5121]: W0218 00:25:16.239392 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940e8886_3e2e_46ea_b228_a4d1b058909f.slice/crio-5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb WatchSource:0}: Error finding container 5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb: Status 404 returned error can't find the container with id 5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.285426 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb"}
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.301488 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl"
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.322668 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl"
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.430717 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.606331 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Feb 18 00:25:17 crc kubenswrapper[5121]: I0218 00:25:17.296721 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerStarted","Data":"9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45"}
Feb 18 00:25:18 crc kubenswrapper[5121]: I0218 00:25:18.310238 5121 generic.go:358] "Generic (PLEG): container finished" podID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerID="8507d0cfae59ce3abe3771ca9e2fb07a2a5834e143ffbda509bdca00c1c4c8fa" exitCode=0
Feb 18 00:25:18 crc kubenswrapper[5121]: I0218 00:25:18.310335 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerDied","Data":"8507d0cfae59ce3abe3771ca9e2fb07a2a5834e143ffbda509bdca00c1c4c8fa"}
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.531148 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.619689 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") "
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.631569 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l" (OuterVolumeSpecName: "kube-api-access-dsb5l") pod "a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" (UID: "a7b03f25-b4f9-4ccf-8d2e-03b352e2c188"). InnerVolumeSpecName "kube-api-access-dsb5l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.721442 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") on node \"crc\" DevicePath \"\""
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.723965 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_a7b03f25-b4f9-4ccf-8d2e-03b352e2c188/curl/0.log"
Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.968719 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log"
Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.373945 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerDied","Data":"9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45"}
Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.373981 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45"
Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.374043 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 18 00:25:25 crc kubenswrapper[5121]: I0218 00:25:25.398307 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103"}
Feb 18 00:25:30 crc kubenswrapper[5121]: I0218 00:25:30.440270 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c"}
Feb 18 00:25:30 crc kubenswrapper[5121]: I0218 00:25:30.482470 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" podStartSLOduration=1.501063933 podStartE2EDuration="15.482449973s" podCreationTimestamp="2026-02-18 00:25:15 +0000 UTC" firstStartedPulling="2026-02-18 00:25:16.241353244 +0000 UTC m=+999.755810979" lastFinishedPulling="2026-02-18 00:25:30.222739284 +0000 UTC m=+1013.737197019" observedRunningTime="2026-02-18 00:25:30.477555701 +0000 UTC m=+1013.992013476" watchObservedRunningTime="2026-02-18 00:25:30.482449973 +0000 UTC m=+1013.996907718"
Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.545131 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.545506 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.545565 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.546463 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.546568 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" gracePeriod=600
Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.485991 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" exitCode=0
Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.486061 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"}
Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.486484 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"}
Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.486505 5121 scope.go:117] "RemoveContainer" containerID="439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"
Feb 18 00:25:54 crc kubenswrapper[5121]: I0218 00:25:54.097468 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log"
Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.716604 5121 generic.go:358] "Generic (PLEG): container finished" podID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerID="5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103" exitCode=0
Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.716772 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103"}
Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.717988 5121 scope.go:117] "RemoveContainer" containerID="5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.146900 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"]
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148578 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148626 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148943 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.156911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.159275 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"]
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.160362 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.162175 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.170173 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\""
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.217323 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.319040 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.352596 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.494354 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.751573 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"]
Feb 18 00:26:01 crc kubenswrapper[5121]: I0218 00:26:01.737004 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerStarted","Data":"b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4"}
Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.761342 5121 generic.go:358] "Generic (PLEG): container finished" podID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerID="fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c" exitCode=0
Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.761778 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c"}
Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.773336 5121 generic.go:358] "Generic (PLEG): container finished" podID="27e98045-8793-4239-ae6e-54ff007c2064" containerID="4b8afd9f2027745ae23d87e7d030ba5e12a46b4b0b9aa4263286e99681437f8a" exitCode=0
Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.773439 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerDied","Data":"4b8afd9f2027745ae23d87e7d030ba5e12a46b4b0b9aa4263286e99681437f8a"}
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.136859 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5"
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.143005 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191580 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191695 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191729 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192571 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192660 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192696 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"27e98045-8793-4239-ae6e-54ff007c2064\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192719 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192806 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") "
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.200879 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q" (OuterVolumeSpecName: "kube-api-access-dwz8q") pod "27e98045-8793-4239-ae6e-54ff007c2064" (UID: "27e98045-8793-4239-ae6e-54ff007c2064"). InnerVolumeSpecName "kube-api-access-dwz8q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.204993 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597" (OuterVolumeSpecName: "kube-api-access-st597") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "kube-api-access-st597". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.211604 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.212112 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.215565 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.222452 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.222678 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.233813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294467 5121 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294531 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294554 5121 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294574 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294592 5121 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294609 5121 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294625 5121 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294642 5121 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795276 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5"
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795268 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb"}
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795734 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb"
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799268 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerDied","Data":"b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4"}
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799369 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4"
Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799460 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw"
Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.228288 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"]
Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.240277 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"]
Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.285286 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" path="/var/lib/kubelet/pods/6d8c4383-cf7d-4c99-badf-42f433b91870/volumes"
Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.039762 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vj4t5_940e8886-3e2e-46ea-b228-a4d1b058909f/smoketest-collectd/0.log"
Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.298384 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vj4t5_940e8886-3e2e-46ea-b228-a4d1b058909f/smoketest-ceilometer/0.log"
Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.537395 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-jpbx6_58efa647-6d57-485a-89c5-66d831cf05c5/default-interconnect/0.log"
Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.778276 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_3a752ce6-d6e6-4222-9c73-8f79a4272c55/bridge/2.log"
Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.096633 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_3a752ce6-d6e6-4222-9c73-8f79a4272c55/sg-core/0.log"
Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.387291 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_de3c7540-7b8d-4e77-968d-68b42aecf4df/bridge/2.log"
Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.706769 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_de3c7540-7b8d-4e77-968d-68b42aecf4df/sg-core/0.log"
Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.282837 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj_91bcc3e0-8b13-4cb5-a115-01265bb95b3a/bridge/1.log"
Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.616868 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj_91bcc3e0-8b13-4cb5-a115-01265bb95b3a/sg-core/0.log"
Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.974191 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_906f1c26-b94f-41a4-98f4-524412eb9029/bridge/2.log"
Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.255792 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_906f1c26-b94f-41a4-98f4-524412eb9029/sg-core/0.log"
Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.573780 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_0f0eb637-4674-4fad-bb8e-e0b7d5ac913b/bridge/2.log"
Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.899062 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_0f0eb637-4674-4fad-bb8e-e0b7d5ac913b/sg-core/0.log"
Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.238398 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zh9kd_a9bb59e6-a92e-442e-87e6-b7331ba07de6/operator/0.log"
Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.549059 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7acc81c6-6ef1-4c1d-ac51-c020076734e6/prometheus/0.log"
Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.851975 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f3bc26d0-c80d-412d-9370-b821cdb7c2d7/elasticsearch/0.log"
Feb 18 00:26:14 crc kubenswrapper[5121]: I0218 00:26:14.139211 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log"
Feb 18 00:26:14 crc kubenswrapper[5121]: I0218 00:26:14.491426 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_36845eb3-f7ec-4a0f-81ca-6650cc34a86d/alertmanager/0.log"
Feb 18 00:26:27 crc kubenswrapper[5121]: I0218 00:26:27.409325 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-gnq9d_24352f2e-20c2-4d2e-bd18-8fb703441b7b/operator/0.log"
Feb 18 00:26:30 crc kubenswrapper[5121]: I0218 00:26:30.694107 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zh9kd_a9bb59e6-a92e-442e-87e6-b7331ba07de6/operator/0.log"
Feb 18 00:26:31 crc kubenswrapper[5121]: I0218 00:26:31.001162 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_660e52e5-64b3-47d1-b593-e7a50159a146/qdr/0.log"
Feb 18 00:26:42 crc kubenswrapper[5121]: I0218 00:26:42.818893 5121 scope.go:117] "RemoveContainer" containerID="2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.337734 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"]
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.338942 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339044 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339093 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339101 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339134 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339142 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339268 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339290 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339304 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.488833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"]
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.488946 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.490757 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-lrqpp\"/\"default-dockercfg-85c9n\""
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.494109 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lrqpp\"/\"openshift-service-ca.crt\""
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.494776 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lrqpp\"/\"kube-root-ca.crt\""
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.579984 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.580223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.681467 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.681598 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.682252 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.715501 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.807007 5121 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.989173 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:26:55 crc kubenswrapper[5121]: W0218 00:26:55.994314 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf78fc17_7be1_4f86_a7aa_dc6b3498c1e1.slice/crio-7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe WatchSource:0}: Error finding container 7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe: Status 404 returned error can't find the container with id 7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe Feb 18 00:26:56 crc kubenswrapper[5121]: I0218 00:26:56.257826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe"} Feb 18 00:27:03 crc kubenswrapper[5121]: I0218 00:27:03.327246 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"} Feb 18 00:27:04 crc kubenswrapper[5121]: I0218 00:27:04.337132 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"} Feb 18 00:27:34 crc kubenswrapper[5121]: I0218 00:27:34.544571 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:27:34 crc kubenswrapper[5121]: I0218 00:27:34.545339 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.148100 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-djfbc_efe976a0-6ea6-4283-8b7c-97caa4f2111b/control-plane-machine-set-operator/0.log" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.263605 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-hfw2k_005aa352-e543-4bfd-ba57-b2cb37eb98f6/machine-api-operator/0.log" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.306889 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-hfw2k_005aa352-e543-4bfd-ba57-b2cb37eb98f6/kube-rbac-proxy/0.log" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.145828 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lrqpp/must-gather-f5znv" podStartSLOduration=58.276304041 podStartE2EDuration="1m5.145801176s" podCreationTimestamp="2026-02-18 00:26:55 +0000 UTC" firstStartedPulling="2026-02-18 00:26:55.996303333 +0000 UTC m=+1099.510761068" lastFinishedPulling="2026-02-18 00:27:02.865800428 +0000 UTC m=+1106.380258203" observedRunningTime="2026-02-18 00:27:04.361944261 +0000 UTC m=+1107.876401996" watchObservedRunningTime="2026-02-18 00:28:00.145801176 +0000 UTC m=+1163.660258971" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.156752 5121 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.172963 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.174568 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179282 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179579 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179737 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.266686 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.369054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.399267 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.503155 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.999458 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:01 crc kubenswrapper[5121]: W0218 00:28:01.003979 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda618abb0_765d_4bea_a66c_df0ca59df619.slice/crio-50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e WatchSource:0}: Error finding container 50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e: Status 404 returned error can't find the container with id 50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e Feb 18 00:28:01 crc kubenswrapper[5121]: I0218 00:28:01.285052 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerStarted","Data":"50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e"} Feb 18 00:28:03 crc kubenswrapper[5121]: I0218 00:28:03.296488 5121 generic.go:358] "Generic (PLEG): container finished" podID="a618abb0-765d-4bea-a66c-df0ca59df619" containerID="4fa4fdc6c1a12fdaa840796c9add2b2cb68190d125123e63c8c4a08a83ec537c" exitCode=0 Feb 18 00:28:03 crc kubenswrapper[5121]: I0218 00:28:03.296561 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" 
event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerDied","Data":"4fa4fdc6c1a12fdaa840796c9add2b2cb68190d125123e63c8c4a08a83ec537c"} Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.545310 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.545807 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.554386 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.635259 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"a618abb0-765d-4bea-a66c-df0ca59df619\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.646904 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9" (OuterVolumeSpecName: "kube-api-access-s2wf9") pod "a618abb0-765d-4bea-a66c-df0ca59df619" (UID: "a618abb0-765d-4bea-a66c-df0ca59df619"). InnerVolumeSpecName "kube-api-access-s2wf9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.737631 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.935259 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-mkxwj_8dac9f2e-68b4-409b-9fd2-bfc0bd928235/cert-manager-controller/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.084419 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-9qb4h_b1211244-3ab3-496b-9610-d2c6d4943528/cert-manager-webhook/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.088474 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-n4bv6_244cd2fe-9d19-45ba-9d3c-2fa6d153f27c/cert-manager-cainjector/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312437 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312463 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerDied","Data":"50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e"} Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312899 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.622374 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.627172 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:28:07 crc kubenswrapper[5121]: I0218 00:28:07.283663 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" path="/var/lib/kubelet/pods/e811a594-9ca7-4167-807e-e39bd75b7912/volumes" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.700742 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-s7jq7_ac0aed84-6c11-41de-9f31-3a7b2a313944/prometheus-operator/0.log" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.799061 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d_5551a95c-fb98-465f-ba4f-3eacc393a47b/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.893494 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd_34785a14-a8e1-49c9-bcca-3996487db06f/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:20 crc kubenswrapper[5121]: I0218 00:28:20.010993 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-p6t4z_2277040f-ef0e-4742-a923-fff6ccf3e5aa/operator/0.log" Feb 18 00:28:20 crc kubenswrapper[5121]: I0218 00:28:20.087965 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6hzks_e476d06d-6937-425a-b4b9-ef90c4e141f5/perses-operator/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544130 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544618 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544678 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.545302 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.545358 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519" gracePeriod=600 Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.756479 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.944093 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.946971 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.003171 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.168486 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.219604 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.222041 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/extract/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.376764 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.522090 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.536387 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539450 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519" exitCode=0 Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539524 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"} Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539567 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" 
event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a"} Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539591 5121 scope.go:117] "RemoveContainer" containerID="a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.589029 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.724024 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.727820 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.762009 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/extract/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.908609 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.055389 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.078836 5121 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.115502 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.212053 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.275864 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.279625 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/extract/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.424018 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.563134 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.586441 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.647438 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.766089 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.819747 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/extract/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.821880 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.946111 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.087363 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.102942 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.103114 5121 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.282944 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.310416 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.470295 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/registry-server/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.488710 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.622840 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.654317 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.668257 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.784985 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.802972 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.803817 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.814783 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.866932 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.886151 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.932813 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-kdn9c_2265e28f-7cec-4dde-b4c4-be79e7d2ccd2/marketplace-operator/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.068485 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/registry-server/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.114569 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.236518 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.252033 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.252158 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.411213 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.415449 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log"
Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.634647 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/registry-server/0.log"
Feb 18 00:28:42 crc kubenswrapper[5121]: I0218 00:28:42.985028 5121 scope.go:117] "RemoveContainer" containerID="b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb"
Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.388334 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-s7jq7_ac0aed84-6c11-41de-9f31-3a7b2a313944/prometheus-operator/0.log"
Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.427336 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d_5551a95c-fb98-465f-ba4f-3eacc393a47b/prometheus-operator-admission-webhook/0.log"
Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.442698 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd_34785a14-a8e1-49c9-bcca-3996487db06f/prometheus-operator-admission-webhook/0.log"
Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.547827 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-p6t4z_2277040f-ef0e-4742-a923-fff6ccf3e5aa/operator/0.log"
Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.588675 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6hzks_e476d06d-6937-425a-b4b9-ef90c4e141f5/perses-operator/0.log"
Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.041089 5121 generic.go:358] "Generic (PLEG): container finished" podID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" exitCode=0
Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.041245 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerDied","Data":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"}
Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.042501 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"
Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.645909 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/gather/0.log"
Feb 18 00:29:38 crc kubenswrapper[5121]: I0218 00:29:38.993744 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"]
Feb 18 00:29:38 crc kubenswrapper[5121]: I0218 00:29:38.995206 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-lrqpp/must-gather-f5znv" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy" containerID="cri-o://101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" gracePeriod=2
Feb 18 00:29:38 crc kubenswrapper[5121]: I0218 00:29:38.996934 5121 status_manager.go:895] "Failed to get status for pod" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" pod="openshift-must-gather-lrqpp/must-gather-f5znv" err="pods \"must-gather-f5znv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-lrqpp\": no relationship found between node 'crc' and this object"
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.008554 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"]
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.427135 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/copy/0.log"
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.428512 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.524046 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") "
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.525024 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") "
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.535054 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx" (OuterVolumeSpecName: "kube-api-access-xlvbx") pod "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" (UID: "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1"). InnerVolumeSpecName "kube-api-access-xlvbx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.576495 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" (UID: "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.627378 5121 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.627442 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") on node \"crc\" DevicePath \"\""
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.114207 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/copy/0.log"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115115 5121 generic.go:358] "Generic (PLEG): container finished" podID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" exitCode=143
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115172 5121 scope.go:117] "RemoveContainer" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115230 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.133517 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.194043 5121 scope.go:117] "RemoveContainer" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"
Feb 18 00:29:40 crc kubenswrapper[5121]: E0218 00:29:40.195752 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": container with ID starting with 101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d not found: ID does not exist" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.195804 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"} err="failed to get container status \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": rpc error: code = NotFound desc = could not find container \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": container with ID starting with 101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d not found: ID does not exist"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.195832 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"
Feb 18 00:29:40 crc kubenswrapper[5121]: E0218 00:29:40.196209 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": container with ID starting with 7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f not found: ID does not exist" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"
Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.196243 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"} err="failed to get container status \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": rpc error: code = NotFound desc = could not find container \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": container with ID starting with 7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f not found: ID does not exist"
Feb 18 00:29:41 crc kubenswrapper[5121]: I0218 00:29:41.285412 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" path="/var/lib/kubelet/pods/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/volumes"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.147754 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149186 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149298 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149366 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149380 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149425 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149437 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149617 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149636 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149646 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.159783 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.159923 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164389 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164712 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\""
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164855 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.247069 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.253740 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.257025 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.257042 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.264081 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.285703 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387704 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387788 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387821 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387900 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.409809 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.481333 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.489932 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.490191 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.490267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.491010 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.507576 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.509868 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.582714 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.814307 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.820466 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.970413 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"]
Feb 18 00:30:00 crc kubenswrapper[5121]: W0218 00:30:00.977277 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod167070f2_72ad_4082_9db1_e89c473bb595.slice/crio-04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a WatchSource:0}: Error finding container 04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a: Status 404 returned error can't find the container with id 04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.315088 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerStarted","Data":"04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"}
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.316940 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a9a31b1-f17e-43bc-b696-c9c002d88629" containerID="061a84f737dd9d7fe6a81a97c1902f398021276c97a43a1fa12f0da19d4453d9" exitCode=0
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.317064 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerDied","Data":"061a84f737dd9d7fe6a81a97c1902f398021276c97a43a1fa12f0da19d4453d9"}
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.317135 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerStarted","Data":"7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"}
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.662631 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757372 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757792 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757926 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.758859 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.763772 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm" (OuterVolumeSpecName: "kube-api-access-27fkm") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "kube-api-access-27fkm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.764150 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860268 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860505 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860687 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.336759 5121 generic.go:358] "Generic (PLEG): container finished" podID="167070f2-72ad-4082-9db1-e89c473bb595" containerID="c3385bbf6952539702a4158338b53a7c26a289bfe3dd0d2ae24acbf56df164bc" exitCode=0
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.336812 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerDied","Data":"c3385bbf6952539702a4158338b53a7c26a289bfe3dd0d2ae24acbf56df164bc"}
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.339737 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.339725 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerDied","Data":"7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"}
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.340202 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.608671 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.793897 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"167070f2-72ad-4082-9db1-e89c473bb595\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") "
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.800932 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl" (OuterVolumeSpecName: "kube-api-access-d9gtl") pod "167070f2-72ad-4082-9db1-e89c473bb595" (UID: "167070f2-72ad-4082-9db1-e89c473bb595"). InnerVolumeSpecName "kube-api-access-d9gtl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.896699 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364256 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerDied","Data":"04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"}
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364407 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.682918 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"]
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.692102 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"]
Feb 18 00:30:07 crc kubenswrapper[5121]: I0218 00:30:07.295287 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" path="/var/lib/kubelet/pods/fb912abb-9dfb-4035-9eea-266ad0057af0/volumes"
Feb 18 00:30:34 crc kubenswrapper[5121]: I0218 00:30:34.544487 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:30:34 crc kubenswrapper[5121]: I0218 00:30:34.545258 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:30:43 crc kubenswrapper[5121]: I0218 00:30:43.183447 5121 scope.go:117] "RemoveContainer" containerID="6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"
Feb 18 00:31:04 crc kubenswrapper[5121]: I0218 00:31:04.544457 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:31:04 crc kubenswrapper[5121]: I0218 00:31:04.545212 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.047966 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9ccz"]
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050406 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a9a31b1-f17e-43bc-b696-c9c002d88629" containerName="collect-profiles"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050438 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9a31b1-f17e-43bc-b696-c9c002d88629" containerName="collect-profiles"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050531 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="167070f2-72ad-4082-9db1-e89c473bb595" containerName="oc"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050574 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="167070f2-72ad-4082-9db1-e89c473bb595" containerName="oc"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050770 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a9a31b1-f17e-43bc-b696-c9c002d88629" containerName="collect-profiles"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.050790 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="167070f2-72ad-4082-9db1-e89c473bb595" containerName="oc"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.065544 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9ccz"]
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.065720 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.169660 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-catalog-content\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.169848 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt67p\" (UniqueName: \"kubernetes.io/projected/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-kube-api-access-qt67p\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.169959 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-utilities\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.271295 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-catalog-content\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.271356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qt67p\" (UniqueName: \"kubernetes.io/projected/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-kube-api-access-qt67p\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.271383 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-utilities\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.271844 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-catalog-content\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.272259 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-utilities\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.307120 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt67p\" (UniqueName: \"kubernetes.io/projected/98d2cea7-3cd3-42f8-b950-f1af7d1d34ce-kube-api-access-qt67p\") pod \"redhat-operators-b9ccz\" (UID: \"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce\") " pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.387741 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9ccz"
Feb 18 00:31:28 crc kubenswrapper[5121]: I0218 00:31:28.844119 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9ccz"]
Feb 18 00:31:29 crc kubenswrapper[5121]: I0218 00:31:29.154565 5121 generic.go:358] "Generic (PLEG): container finished" podID="98d2cea7-3cd3-42f8-b950-f1af7d1d34ce" containerID="d3a83c25815e42d5ad06e64fc64357765f02eab982d4636d7dfcd3e7de5bc47b" exitCode=0
Feb 18 00:31:29 crc kubenswrapper[5121]: I0218 00:31:29.154634 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9ccz" event={"ID":"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce","Type":"ContainerDied","Data":"d3a83c25815e42d5ad06e64fc64357765f02eab982d4636d7dfcd3e7de5bc47b"}
Feb 18 00:31:29 crc kubenswrapper[5121]: I0218 00:31:29.155004 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9ccz" event={"ID":"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce","Type":"ContainerStarted","Data":"11595e62975623722b0b028a3d6e10002036771789cb2beb9a9a9e738574c47c"}
Feb 18 00:31:30 crc kubenswrapper[5121]: I0218 00:31:30.167261 5121
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9ccz" event={"ID":"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce","Type":"ContainerStarted","Data":"5d7eac64e56da15e6fe5b69df39999d8b030a7b24acc6ff4d2288c9dea963853"} Feb 18 00:31:31 crc kubenswrapper[5121]: I0218 00:31:31.177024 5121 generic.go:358] "Generic (PLEG): container finished" podID="98d2cea7-3cd3-42f8-b950-f1af7d1d34ce" containerID="5d7eac64e56da15e6fe5b69df39999d8b030a7b24acc6ff4d2288c9dea963853" exitCode=0 Feb 18 00:31:31 crc kubenswrapper[5121]: I0218 00:31:31.177263 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9ccz" event={"ID":"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce","Type":"ContainerDied","Data":"5d7eac64e56da15e6fe5b69df39999d8b030a7b24acc6ff4d2288c9dea963853"} Feb 18 00:31:32 crc kubenswrapper[5121]: I0218 00:31:32.189152 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9ccz" event={"ID":"98d2cea7-3cd3-42f8-b950-f1af7d1d34ce","Type":"ContainerStarted","Data":"f01c5d0ac42cfaf53949965cae9b3fa4bdde1a736b6a098ebceae1caf6917955"} Feb 18 00:31:32 crc kubenswrapper[5121]: I0218 00:31:32.214093 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9ccz" podStartSLOduration=3.42836581 podStartE2EDuration="4.214072588s" podCreationTimestamp="2026-02-18 00:31:28 +0000 UTC" firstStartedPulling="2026-02-18 00:31:29.157072793 +0000 UTC m=+1372.671530538" lastFinishedPulling="2026-02-18 00:31:29.942779571 +0000 UTC m=+1373.457237316" observedRunningTime="2026-02-18 00:31:32.210933383 +0000 UTC m=+1375.725391168" watchObservedRunningTime="2026-02-18 00:31:32.214072588 +0000 UTC m=+1375.728530333" Feb 18 00:31:34 crc kubenswrapper[5121]: I0218 00:31:34.545259 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:31:34 crc kubenswrapper[5121]: I0218 00:31:34.545841 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:31:34 crc kubenswrapper[5121]: I0218 00:31:34.545909 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:31:34 crc kubenswrapper[5121]: I0218 00:31:34.546964 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:31:34 crc kubenswrapper[5121]: I0218 00:31:34.547063 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a" gracePeriod=600 Feb 18 00:31:35 crc kubenswrapper[5121]: I0218 00:31:35.230195 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a" exitCode=0 Feb 18 00:31:35 crc kubenswrapper[5121]: I0218 00:31:35.230269 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a"} Feb 18 00:31:35 crc kubenswrapper[5121]: I0218 00:31:35.230562 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"7051f09833b7169819d8c60d6024397c3279c84854d80b0a75e97ceaae2a9357"} Feb 18 00:31:35 crc kubenswrapper[5121]: I0218 00:31:35.230587 5121 scope.go:117] "RemoveContainer" containerID="1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519" Feb 18 00:31:38 crc kubenswrapper[5121]: I0218 00:31:38.388582 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-b9ccz" Feb 18 00:31:38 crc kubenswrapper[5121]: I0218 00:31:38.389254 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9ccz" Feb 18 00:31:38 crc kubenswrapper[5121]: I0218 00:31:38.467466 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9ccz" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515145204166024451 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015145204166017366 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015145200770016506 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015145200770015456 5ustar corecore